Test Report: Docker_Linux_crio 21832

e7c87104757589f66628ccdf942f4e049b607564:2025-11-01:42155

Tests failed (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 14.94
36 TestAddons/parallel/RegistryCreds 0.5
37 TestAddons/parallel/Ingress 147.62
38 TestAddons/parallel/InspektorGadget 5.29
39 TestAddons/parallel/MetricsServer 5.36
41 TestAddons/parallel/CSI 42.6
42 TestAddons/parallel/Headlamp 2.79
43 TestAddons/parallel/CloudSpanner 5.32
44 TestAddons/parallel/LocalPath 11.21
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 5.27
47 TestAddons/parallel/AmdGpuDevicePlugin 6.31
97 TestFunctional/parallel/ServiceCmdConnect 603.15
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.61
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.96
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.83
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.56
154 TestFunctional/parallel/ServiceCmd/URL 0.56
191 TestJSONOutput/pause/Command 2.1
197 TestJSONOutput/unpause/Command 1.5
248 TestPreload 437.68
263 TestPause/serial/Pause 6.18
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.25
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.28
311 TestStartStop/group/old-k8s-version/serial/Pause 6.74
314 TestStartStop/group/no-preload/serial/Pause 7.9
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.34
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.78
332 TestStartStop/group/newest-cni/serial/Pause 6.41
335 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.37
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.88
356 TestStartStop/group/embed-certs/serial/Pause 7.07
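
At least the Volcano, Registry, and RegistryCreds failures below are not regressions in the addons themselves: the functional checks pass (or are skipped on crio), and each test then fails on the trailing `addons disable ...` call, which exits with status 11 (MK_ADDON_DISABLE_PAUSED) due to the runc error shown in the logs. To re-run a single failing test locally, the standard Go test filter against the integration package is a reasonable starting point (a sketch; the integration suite normally takes extra minikube-specific flags such as start args, which are omitted here):

  go test -v -timeout 60m ./test/integration -run 'TestAddons/serial/Volcano'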
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable volcano --alsologtostderr -v=1: exit status 11 (263.654534ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:31:24.962919  527395 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:24.963239  527395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:24.963256  527395 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:24.963260  527395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:24.963495  527395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:24.963761  527395 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:24.964142  527395 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:24.964163  527395 addons.go:607] checking whether the cluster is paused
	I1101 09:31:24.964245  527395 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:24.964271  527395 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:24.964652  527395 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:24.983504  527395 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:24.983571  527395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:25.001299  527395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:25.102002  527395 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:25.102111  527395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:25.132750  527395 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:25.132771  527395 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:25.132775  527395 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:25.132778  527395 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:25.132780  527395 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:25.132786  527395 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:25.132788  527395 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:25.132802  527395 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:25.132805  527395 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:25.132811  527395 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:25.132813  527395 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:25.132816  527395 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:25.132819  527395 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:25.132821  527395 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:25.132823  527395 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:25.132829  527395 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:25.132849  527395 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:25.132854  527395 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:25.132858  527395 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:25.132862  527395 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:25.132866  527395 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:25.132870  527395 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:25.132874  527395 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:25.132878  527395 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:25.132882  527395 cri.go:89] found id: ""
	I1101 09:31:25.132927  527395 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:25.147963  527395 out.go:203] 
	W1101 09:31:25.149047  527395 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:25.149065  527395 out.go:285] * 
	* 
	W1101 09:31:25.152489  527395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:25.153621  527395 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
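
The exit status 11 here, and in the other addon-disable failures in this report, comes from the pause check that runs before an addon is disabled: the crictl listing of kube-system containers succeeds, but the follow-up `sudo runc list -f json` fails with "open /run/runc: no such file or directory". A few commands that could confirm which low-level OCI runtime this CRI-O node is actually using (a sketch, assuming shell access via `minikube ssh -p addons-050432`; the /etc/crio path is an assumption):

  sudo ls -d /run/runc /run/crun 2>&1          # which runtime state directory actually exists on the node
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # the CRI-level check that did succeed
  grep -ri default_runtime /etc/crio/ 2>/dev/null    # runtime CRI-O is configured to hand containers to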

TestAddons/parallel/Registry (14.94s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.237151ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002789768s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004003162s
addons_test.go:392: (dbg) Run:  kubectl --context addons-050432 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-050432 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-050432 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.456662885s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 ip
2025/11/01 09:31:49 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable registry --alsologtostderr -v=1: exit status 11 (256.666639ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:31:49.745062  529859 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:49.745440  529859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:49.745453  529859 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:49.745461  529859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:49.745688  529859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:49.746006  529859 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:49.746403  529859 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:49.746426  529859 addons.go:607] checking whether the cluster is paused
	I1101 09:31:49.746532  529859 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:49.746557  529859 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:49.747122  529859 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:49.764648  529859 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:49.764739  529859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:49.782424  529859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:49.883033  529859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:49.883122  529859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:49.915466  529859 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:49.915487  529859 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:49.915490  529859 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:49.915494  529859 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:49.915496  529859 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:49.915499  529859 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:49.915514  529859 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:49.915517  529859 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:49.915519  529859 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:49.915525  529859 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:49.915528  529859 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:49.915531  529859 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:49.915533  529859 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:49.915536  529859 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:49.915538  529859 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:49.915542  529859 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:49.915547  529859 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:49.915550  529859 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:49.915552  529859 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:49.915555  529859 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:49.915557  529859 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:49.915559  529859 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:49.915561  529859 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:49.915563  529859 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:49.915566  529859 cri.go:89] found id: ""
	I1101 09:31:49.915602  529859 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:49.930651  529859 out.go:203] 
	W1101 09:31:49.932743  529859 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:49.932765  529859 out.go:285] * 
	* 
	W1101 09:31:49.936140  529859 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:49.937279  529859 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.94s)
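
The registry addon itself was healthy in this run: both the registry and registry-proxy pods became Ready and the in-cluster reachability probe succeeded; only the trailing `addons disable registry` call failed, with the same runc error as in the Volcano log above. The probe can be repeated by hand using the test's own invocation (addons_test.go:397):

  kubectl --context addons-050432 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"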

TestAddons/parallel/RegistryCreds (0.5s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.020202ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-050432
addons_test.go:332: (dbg) Run:  kubectl --context addons-050432 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (286.332007ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:31:51.824826  530228 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:51.825244  530228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:51.825258  530228 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:51.825265  530228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:51.825614  530228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:51.826030  530228 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:51.826653  530228 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:51.826680  530228 addons.go:607] checking whether the cluster is paused
	I1101 09:31:51.826822  530228 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:51.826859  530228 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:51.827492  530228 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:51.850649  530228 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:51.850696  530228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:51.871765  530228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:51.975980  530228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:51.976070  530228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:52.008959  530228 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:52.008994  530228 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:52.008999  530228 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:52.009004  530228 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:52.009008  530228 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:52.009012  530228 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:52.009016  530228 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:52.009039  530228 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:52.009044  530228 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:52.009053  530228 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:52.009061  530228 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:52.009065  530228 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:52.009069  530228 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:52.009074  530228 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:52.009078  530228 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:52.009086  530228 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:52.009093  530228 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:52.009099  530228 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:52.009102  530228 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:52.009105  530228 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:52.009109  530228 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:52.009113  530228 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:52.009118  530228 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:52.009127  530228 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:52.009132  530228 cri.go:89] found id: ""
	I1101 09:31:52.009186  530228 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:52.023417  530228 out.go:203] 
	W1101 09:31:52.025939  530228 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:52.025958  530228 out.go:285] * 
	* 
	W1101 09:31:52.029013  530228 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:52.030096  530228 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)

TestAddons/parallel/Ingress (147.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-050432 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-050432 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-050432 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a81df693-1c6b-497d-a427-99b6e87746d6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a81df693-1c6b-497d-a427-99b6e87746d6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004019885s
I1101 09:31:58.906729  517687 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.848923153s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-050432 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
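
The curl through `minikube ssh` above ran for 2m13s and returned ssh exit status 28, which matches curl's "operation timed out" exit code being passed through; the ingress controller never answered on port 80 inside the node. A manual retry with a short timeout and verbose output would show whether the connection hangs or is refused (a sketch; the -m and -v flags are additions to the test's own command):

  out/minikube-linux-amd64 -p addons-050432 ssh \
    "curl -v -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"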
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-050432
helpers_test.go:243: (dbg) docker inspect addons-050432:

-- stdout --
	[
	    {
	        "Id": "52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339",
	        "Created": "2025-11-01T09:29:12.30404353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519756,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:29:12.333366701Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339/hostname",
	        "HostsPath": "/var/lib/docker/containers/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339/hosts",
	        "LogPath": "/var/lib/docker/containers/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339-json.log",
	        "Name": "/addons-050432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-050432:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-050432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339",
	                "LowerDir": "/var/lib/docker/overlay2/002d67978d79bc0f2e4490bb5ec289013fa9e74d90b8eeb7652b0c6eddbb2c5b-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/002d67978d79bc0f2e4490bb5ec289013fa9e74d90b8eeb7652b0c6eddbb2c5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/002d67978d79bc0f2e4490bb5ec289013fa9e74d90b8eeb7652b0c6eddbb2c5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/002d67978d79bc0f2e4490bb5ec289013fa9e74d90b8eeb7652b0c6eddbb2c5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-050432",
	                "Source": "/var/lib/docker/volumes/addons-050432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-050432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-050432",
	                "name.minikube.sigs.k8s.io": "addons-050432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65d70f781156ed99f9d651bb0f1904a09cf6efefa7f0f3f91a0b2cb1c535e1a9",
	            "SandboxKey": "/var/run/docker/netns/65d70f781156",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-050432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:db:f1:a1:2a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "689a180e30ba3609142ebebf73973e7d729fe8df59d4790f17d3a3d8905bbd97",
	                    "EndpointID": "f0d317fa5bcaf94257636a0dd65fefcc78aded5ebf19ba459bbb69652b69140d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-050432",
	                        "52f6d966a5c3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
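
The inspect output above shows the node container running and reachable on 127.0.0.1 via its mapped ports (SSH 22/tcp on 32888, apiserver 8443/tcp on 32891). The SSH port can be read back with the same Go template the harness uses in the stderr logs earlier in this report:

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-050432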
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-050432 -n addons-050432
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-050432 logs -n 25: (1.228428841s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-679292 --alsologtostderr --binary-mirror http://127.0.0.1:45711 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-679292 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p binary-mirror-679292                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-679292 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ addons  │ enable dashboard -p addons-050432                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-050432                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ start   │ -p addons-050432 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ addons-050432 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ ssh     │ addons-050432 ssh cat /opt/local-path-provisioner/pvc-611c46b8-835f-4e6f-b58e-711be421d3e5_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable headlamp -p addons-050432 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ ip      │ addons-050432 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ addons-050432 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-050432                                                                                                                                                                                                                                                                                                                                                                                           │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ addons-050432 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ ssh     │ addons-050432 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ addons  │ addons-050432 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │                     │
	│ ip      │ addons-050432 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-050432        │ jenkins │ v1.37.0 │ 01 Nov 25 09:34 UTC │ 01 Nov 25 09:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:28:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:28:49.738334  519099 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:49.738433  519099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:49.738439  519099 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:49.738443  519099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:49.738626  519099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:28:49.739202  519099 out.go:368] Setting JSON to false
	I1101 09:28:49.740139  519099 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7867,"bootTime":1761981463,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:28:49.740240  519099 start.go:143] virtualization: kvm guest
	I1101 09:28:49.770072  519099 out.go:179] * [addons-050432] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:28:49.832323  519099 notify.go:221] Checking for updates...
	I1101 09:28:49.832368  519099 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:28:49.892871  519099 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:28:49.916065  519099 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:28:49.989071  519099 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 09:28:50.050975  519099 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:28:50.073294  519099 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:28:50.155507  519099 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:28:50.178194  519099 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:28:50.178295  519099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:50.237658  519099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-01 09:28:50.226509872 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:50.237781  519099 docker.go:319] overlay module found
	I1101 09:28:50.311483  519099 out.go:179] * Using the docker driver based on user configuration
	I1101 09:28:50.394643  519099 start.go:309] selected driver: docker
	I1101 09:28:50.394673  519099 start.go:930] validating driver "docker" against <nil>
	I1101 09:28:50.394722  519099 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:28:50.395403  519099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:50.456471  519099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-01 09:28:50.446986286 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:50.456653  519099 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:28:50.456884  519099 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:28:50.478719  519099 out.go:179] * Using Docker driver with root privileges
	I1101 09:28:50.519824  519099 cni.go:84] Creating CNI manager for ""
	I1101 09:28:50.519938  519099 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:50.519951  519099 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:28:50.520046  519099 start.go:353] cluster config:
	{Name:addons-050432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
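
The cluster config printed above maps onto a start invocation of roughly this shape (these are real minikube flags, but the exact flag set used by the test harness is not visible in this log, so treat it as an approximation):

    out/minikube-linux-amd64 start -p addons-050432 \
      --driver=docker \
      --container-runtime=crio \
      --kubernetes-version=v1.34.1 \
      --memory=4096 --cpus=2
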
	I1101 09:28:50.527861  519099 out.go:179] * Starting "addons-050432" primary control-plane node in "addons-050432" cluster
	I1101 09:28:50.528922  519099 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:28:50.529959  519099 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:28:50.530859  519099 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:28:50.530900  519099 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:28:50.530910  519099 cache.go:59] Caching tarball of preloaded images
	I1101 09:28:50.530970  519099 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:28:50.531011  519099 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:28:50.531022  519099 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:28:50.531428  519099 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/config.json ...
	I1101 09:28:50.531452  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/config.json: {Name:mk13bc5aaa312233e0b39caae472a4ee7166ba6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:50.547884  519099 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:50.548002  519099 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:28:50.548019  519099 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:28:50.548025  519099 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:28:50.548033  519099 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:28:50.548040  519099 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 09:29:03.621680  519099 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 09:29:03.621728  519099 cache.go:233] Successfully downloaded all kic artifacts
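
Once the cached tarball has been loaded, the kicbase image should be visible to the local daemon; a quick spot-check (illustrative, not part of the test run):

    docker images gcr.io/k8s-minikube/kicbase-builds
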
	I1101 09:29:03.621774  519099 start.go:360] acquireMachinesLock for addons-050432: {Name:mk85ed1bbc2ce61443a1b4bdfd37e48e9bf1adde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:29:03.621920  519099 start.go:364] duration metric: took 118.99µs to acquireMachinesLock for "addons-050432"
	I1101 09:29:03.621959  519099 start.go:93] Provisioning new machine with config: &{Name:addons-050432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:03.622071  519099 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:29:03.624186  519099 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 09:29:03.624452  519099 start.go:159] libmachine.API.Create for "addons-050432" (driver="docker")
	I1101 09:29:03.624495  519099 client.go:173] LocalClient.Create starting
	I1101 09:29:03.624598  519099 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem
	I1101 09:29:03.846918  519099 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem
	I1101 09:29:04.148716  519099 cli_runner.go:164] Run: docker network inspect addons-050432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:29:04.166906  519099 cli_runner.go:211] docker network inspect addons-050432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:29:04.167005  519099 network_create.go:284] running [docker network inspect addons-050432] to gather additional debugging logs...
	I1101 09:29:04.167029  519099 cli_runner.go:164] Run: docker network inspect addons-050432
	W1101 09:29:04.184106  519099 cli_runner.go:211] docker network inspect addons-050432 returned with exit code 1
	I1101 09:29:04.184144  519099 network_create.go:287] error running [docker network inspect addons-050432]: docker network inspect addons-050432: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-050432 not found
	I1101 09:29:04.184167  519099 network_create.go:289] output of [docker network inspect addons-050432]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-050432 not found
	
	** /stderr **
	I1101 09:29:04.184263  519099 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:04.201793  519099 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b606e0}
	I1101 09:29:04.201874  519099 network_create.go:124] attempt to create docker network addons-050432 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 09:29:04.201932  519099 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-050432 addons-050432
	I1101 09:29:04.263438  519099 network_create.go:108] docker network addons-050432 192.168.49.0/24 created
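
The freshly created network can be verified with a plain inspect; the format string below is illustrative, not the one minikube itself uses:

    docker network inspect addons-050432 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
    # expected: 192.168.49.0/24 gw 192.168.49.1
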
	I1101 09:29:04.263480  519099 kic.go:121] calculated static IP "192.168.49.2" for the "addons-050432" container
	I1101 09:29:04.263547  519099 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:29:04.281105  519099 cli_runner.go:164] Run: docker volume create addons-050432 --label name.minikube.sigs.k8s.io=addons-050432 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:29:04.301066  519099 oci.go:103] Successfully created a docker volume addons-050432
	I1101 09:29:04.301164  519099 cli_runner.go:164] Run: docker run --rm --name addons-050432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-050432 --entrypoint /usr/bin/test -v addons-050432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:29:07.874075  519099 cli_runner.go:217] Completed: docker run --rm --name addons-050432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-050432 --entrypoint /usr/bin/test -v addons-050432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (3.572628129s)
	I1101 09:29:07.874120  519099 oci.go:107] Successfully prepared a docker volume addons-050432
	I1101 09:29:07.874157  519099 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:07.874189  519099 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:29:07.874256  519099 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-050432:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:29:12.233594  519099 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-050432:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.359290404s)
	I1101 09:29:12.233631  519099 kic.go:203] duration metric: took 4.359438658s to extract preloaded images to volume ...
	W1101 09:29:12.233730  519099 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:29:12.233771  519099 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:29:12.233823  519099 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:29:12.288579  519099 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-050432 --name addons-050432 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-050432 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-050432 --network addons-050432 --ip 192.168.49.2 --volume addons-050432:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:29:12.549388  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Running}}
	I1101 09:29:12.568132  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:12.586104  519099 cli_runner.go:164] Run: docker exec addons-050432 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:29:12.631389  519099 oci.go:144] the created container "addons-050432" has a running status.
	I1101 09:29:12.631443  519099 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa...
	I1101 09:29:12.997200  519099 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:29:13.023690  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:13.041301  519099 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:29:13.041324  519099 kic_runner.go:114] Args: [docker exec --privileged addons-050432 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:29:13.086315  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:13.104632  519099 machine.go:94] provisionDockerMachine start ...
	I1101 09:29:13.104767  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.123188  519099 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:13.123512  519099 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1101 09:29:13.123530  519099 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:29:13.265332  519099 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-050432
	
	I1101 09:29:13.265367  519099 ubuntu.go:182] provisioning hostname "addons-050432"
	I1101 09:29:13.265457  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.283079  519099 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:13.283322  519099 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1101 09:29:13.283346  519099 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-050432 && echo "addons-050432" | sudo tee /etc/hostname
	I1101 09:29:13.435808  519099 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-050432
	
	I1101 09:29:13.435929  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.453396  519099 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:13.453653  519099 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1101 09:29:13.453678  519099 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-050432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-050432/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-050432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:29:13.594654  519099 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:29:13.594687  519099 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 09:29:13.594755  519099 ubuntu.go:190] setting up certificates
	I1101 09:29:13.594774  519099 provision.go:84] configureAuth start
	I1101 09:29:13.594855  519099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-050432
	I1101 09:29:13.612518  519099 provision.go:143] copyHostCerts
	I1101 09:29:13.612600  519099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 09:29:13.612734  519099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 09:29:13.612833  519099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 09:29:13.612931  519099 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.addons-050432 san=[127.0.0.1 192.168.49.2 addons-050432 localhost minikube]
	I1101 09:29:13.785748  519099 provision.go:177] copyRemoteCerts
	I1101 09:29:13.785815  519099 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:29:13.785865  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.804072  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:13.905258  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:29:13.925154  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:29:13.942789  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:29:13.960753  519099 provision.go:87] duration metric: took 365.964817ms to configureAuth
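
The SSH endpoint used throughout provisioning is the ephemeral host port published for 22/tcp (32888 in this run; it changes on every start). Equivalent manual access, assuming the same key and port:

    ssh -o StrictHostKeyChecking=no -p 32888 \
      -i /home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa \
      docker@127.0.0.1
    # or, without tracking the port yourself:
    out/minikube-linux-amd64 -p addons-050432 ssh
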
	I1101 09:29:13.960782  519099 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:29:13.960986  519099 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:13.961117  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.979133  519099 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:13.979355  519099 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1101 09:29:13.979376  519099 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:29:14.231624  519099 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:29:14.231648  519099 machine.go:97] duration metric: took 1.126974312s to provisionDockerMachine
	I1101 09:29:14.231663  519099 client.go:176] duration metric: took 10.607158949s to LocalClient.Create
	I1101 09:29:14.231687  519099 start.go:167] duration metric: took 10.607235481s to libmachine.API.Create "addons-050432"
	I1101 09:29:14.231697  519099 start.go:293] postStartSetup for "addons-050432" (driver="docker")
	I1101 09:29:14.231713  519099 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:29:14.231783  519099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:29:14.231852  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:14.249683  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:14.352128  519099 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:29:14.355964  519099 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:29:14.355998  519099 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:29:14.356011  519099 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 09:29:14.356083  519099 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 09:29:14.356116  519099 start.go:296] duration metric: took 124.412164ms for postStartSetup
	I1101 09:29:14.356534  519099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-050432
	I1101 09:29:14.373461  519099 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/config.json ...
	I1101 09:29:14.373733  519099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:29:14.373799  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:14.390766  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:14.489100  519099 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:29:14.493864  519099 start.go:128] duration metric: took 10.871749569s to createHost
	I1101 09:29:14.493892  519099 start.go:83] releasing machines lock for "addons-050432", held for 10.871953912s
	I1101 09:29:14.493967  519099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-050432
	I1101 09:29:14.511350  519099 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:29:14.511392  519099 ssh_runner.go:195] Run: cat /version.json
	I1101 09:29:14.511451  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:14.511453  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:14.531542  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:14.531913  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:14.687667  519099 ssh_runner.go:195] Run: systemctl --version
	I1101 09:29:14.694982  519099 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:29:14.730130  519099 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:29:14.734887  519099 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:29:14.734959  519099 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:29:14.760618  519099 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:29:14.760646  519099 start.go:496] detecting cgroup driver to use...
	I1101 09:29:14.760687  519099 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:29:14.760740  519099 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:29:14.777584  519099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:29:14.790785  519099 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:29:14.790861  519099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:29:14.808054  519099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:29:14.826708  519099 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:29:14.911264  519099 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:29:15.001175  519099 docker.go:234] disabling docker service ...
	I1101 09:29:15.001247  519099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:29:15.021563  519099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:29:15.034872  519099 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:29:15.119011  519099 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:29:15.202571  519099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:29:15.216240  519099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:29:15.231527  519099 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:29:15.231588  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.242082  519099 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:29:15.242151  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.251321  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.260363  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.269453  519099 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:29:15.278022  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.287220  519099 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.301441  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.310783  519099 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:29:15.318554  519099 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:29:15.326193  519099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:15.401515  519099 ssh_runner.go:195] Run: sudo systemctl restart crio
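
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (reconstructed from the commands, not read back from the node):

    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
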
	I1101 09:29:15.512813  519099 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:29:15.512914  519099 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:29:15.517021  519099 start.go:564] Will wait 60s for crictl version
	I1101 09:29:15.517091  519099 ssh_runner.go:195] Run: which crictl
	I1101 09:29:15.520706  519099 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:29:15.547235  519099 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:29:15.547348  519099 ssh_runner.go:195] Run: crio --version
	I1101 09:29:15.576174  519099 ssh_runner.go:195] Run: crio --version
	I1101 09:29:15.606970  519099 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:29:15.607906  519099 cli_runner.go:164] Run: docker network inspect addons-050432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:15.625198  519099 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:29:15.629643  519099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:29:15.640409  519099 kubeadm.go:884] updating cluster {Name:addons-050432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:29:15.640585  519099 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:15.640659  519099 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:15.674281  519099 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:15.674305  519099 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:29:15.674353  519099 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:15.700405  519099 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:15.700431  519099 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:29:15.700440  519099 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:29:15.700585  519099 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-050432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
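
The kubelet unit snippet above is written out as a systemd drop-in (the 363-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down). Once the node is up it can be reviewed with, for example:

    out/minikube-linux-amd64 -p addons-050432 ssh -- sudo systemctl cat kubelet
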
	I1101 09:29:15.700683  519099 ssh_runner.go:195] Run: crio config
	I1101 09:29:15.747539  519099 cni.go:84] Creating CNI manager for ""
	I1101 09:29:15.747565  519099 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:15.747587  519099 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:29:15.747612  519099 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-050432 NodeName:addons-050432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:29:15.747735  519099 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-050432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:29:15.747795  519099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:29:15.756355  519099 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:29:15.756445  519099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:29:15.764531  519099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:29:15.777214  519099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:29:15.792198  519099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
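
With the rendered kubeadm config staged on the node, it can be sanity-checked before init; this assumes kubeadm is staged alongside kubelet under /var/lib/minikube/binaries/v1.34.1 (the log lists that directory but not its contents) and that the staged version supports the config validate subcommand:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
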
	I1101 09:29:15.805652  519099 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:29:15.809613  519099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:29:15.820042  519099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:15.900906  519099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:29:15.925649  519099 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432 for IP: 192.168.49.2
	I1101 09:29:15.925679  519099 certs.go:195] generating shared ca certs ...
	I1101 09:29:15.925703  519099 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:15.926454  519099 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 09:29:16.022046  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt ...
	I1101 09:29:16.022082  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt: {Name:mk63d01b6c9e98cfdc58d5d995f045e109b91fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.022294  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key ...
	I1101 09:29:16.022311  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key: {Name:mk1c088d57a76aec79a4679eab5d0c5fe88c7b8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.022423  519099 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 09:29:16.215738  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt ...
	I1101 09:29:16.215772  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt: {Name:mkef3abd4e19242659ffaf335c2eefaa2d410609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.215990  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key ...
	I1101 09:29:16.216007  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key: {Name:mk2907020cf1dfded2b6a38c835cffcdebe60893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.216117  519099 certs.go:257] generating profile certs ...
	I1101 09:29:16.216181  519099 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.key
	I1101 09:29:16.216197  519099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt with IP's: []
	I1101 09:29:16.424239  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt ...
	I1101 09:29:16.424274  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: {Name:mkdcf555ffcc3ed403b4a9f8892c8fa924b9892d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.424493  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.key ...
	I1101 09:29:16.424508  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.key: {Name:mkff22c4720f09e526d72c814eb218b4abb731ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.424632  519099 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key.11812e3f
	I1101 09:29:16.424663  519099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt.11812e3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 09:29:16.725721  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt.11812e3f ...
	I1101 09:29:16.725753  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt.11812e3f: {Name:mka825e1368231832c84fcee1436857ed56519b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.725972  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key.11812e3f ...
	I1101 09:29:16.726004  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key.11812e3f: {Name:mk629b67aa2bc951d6bb8303aab04a470139f8ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.726129  519099 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt.11812e3f -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt
	I1101 09:29:16.726217  519099 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key.11812e3f -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key
	I1101 09:29:16.726266  519099 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.key
	I1101 09:29:16.726285  519099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.crt with IP's: []
	I1101 09:29:16.757345  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.crt ...
	I1101 09:29:16.757379  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.crt: {Name:mk68b51a8f1a074cfa06b541e0d862f35b908512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.757571  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.key ...
	I1101 09:29:16.757593  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.key: {Name:mk4461931e886dc045e8553c747238bb971866ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.757804  519099 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:29:16.757861  519099 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:29:16.757896  519099 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:29:16.757926  519099 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
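
The profile certificates logged here follow the ordinary x509 pattern: a serving certificate signed by the cluster CA, with the service IP, loopback and node IPs from the log as SANs. The following is a minimal, self-contained sketch of that pattern using only the Go standard library; it is not minikube's crypto.go, and the key size, validity periods and subject names are assumptions.

// cert_sketch.go - minimal sketch of a CA-signed serving certificate with IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Serving certificate with the same IP SANs seen in the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
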
	I1101 09:29:16.758598  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:29:16.777877  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:29:16.796429  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:29:16.814138  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:29:16.831943  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:29:16.851436  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:29:16.869568  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:29:16.887792  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:29:16.905748  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:29:16.925744  519099 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:29:16.939270  519099 ssh_runner.go:195] Run: openssl version
	I1101 09:29:16.946032  519099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:29:16.957676  519099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:16.961937  519099 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:16.962005  519099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:16.996474  519099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:29:17.006120  519099 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:29:17.010195  519099 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:29:17.010254  519099 kubeadm.go:401] StartCluster: {Name:addons-050432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:29:17.010348  519099 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:29:17.010433  519099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:29:17.039120  519099 cri.go:89] found id: ""
	I1101 09:29:17.039194  519099 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:29:17.048265  519099 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:29:17.057132  519099 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:29:17.057184  519099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:29:17.065342  519099 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:29:17.065360  519099 kubeadm.go:158] found existing configuration files:
	
	I1101 09:29:17.065405  519099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:29:17.074375  519099 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:29:17.074430  519099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:29:17.082703  519099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:29:17.091120  519099 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:29:17.091182  519099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:29:17.100265  519099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:29:17.108402  519099 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:29:17.108480  519099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:29:17.115991  519099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:29:17.123999  519099 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:29:17.124051  519099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:29:17.131677  519099 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:29:17.169293  519099 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:29:17.169380  519099 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:29:17.190366  519099 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:29:17.190450  519099 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:29:17.190499  519099 kubeadm.go:319] OS: Linux
	I1101 09:29:17.190552  519099 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:29:17.190606  519099 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:29:17.190667  519099 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:29:17.190726  519099 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:29:17.190782  519099 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:29:17.190863  519099 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:29:17.190922  519099 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:29:17.190964  519099 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:29:17.251190  519099 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:29:17.251348  519099 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:29:17.251499  519099 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:29:17.258625  519099 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:29:17.260770  519099 out.go:252]   - Generating certificates and keys ...
	I1101 09:29:17.260893  519099 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:29:17.260989  519099 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:29:17.519595  519099 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:29:17.771540  519099 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:29:18.025644  519099 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:29:18.155239  519099 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:29:18.359515  519099 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:29:18.359635  519099 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-050432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:29:18.437196  519099 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:29:18.437314  519099 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-050432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:29:18.583509  519099 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:29:18.976111  519099 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:29:19.401990  519099 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:29:19.402068  519099 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:29:19.790963  519099 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:29:20.089943  519099 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:29:20.117512  519099 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:29:20.182474  519099 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:29:20.610354  519099 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:29:20.611243  519099 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:29:20.615193  519099 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:29:20.618516  519099 out.go:252]   - Booting up control plane ...
	I1101 09:29:20.618612  519099 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:29:20.618681  519099 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:29:20.618743  519099 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:29:20.631922  519099 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:29:20.632071  519099 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:29:20.640210  519099 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:29:20.640368  519099 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:29:20.640445  519099 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:29:20.737187  519099 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:29:20.737306  519099 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:29:21.239430  519099 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.93145ms
	I1101 09:29:21.248073  519099 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:29:21.248229  519099 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 09:29:21.248413  519099 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:29:21.248536  519099 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:29:22.630422  519099 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.382592516s
	I1101 09:29:23.706940  519099 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.45927635s
	I1101 09:29:25.249094  519099 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001308533s
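
The [control-plane-check] phase above simply polls each component's health endpoint until it returns 200. A minimal sketch of that loop is below; the endpoint URL is taken from the log, while the timeout, poll interval and the decision to skip TLS verification (the apiserver serves a cluster-internal CA) are assumptions made for illustration.

// healthz_sketch.go - poll a control-plane health endpoint until it reports healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	// Certificate verification is skipped in this sketch because the endpoint
	// is served with a cluster-internal CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.49.2:8443/livez", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kube-apiserver is healthy")
}
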
	I1101 09:29:25.260779  519099 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:29:25.270996  519099 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:29:25.279462  519099 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:29:25.279723  519099 kubeadm.go:319] [mark-control-plane] Marking the node addons-050432 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:29:25.287121  519099 kubeadm.go:319] [bootstrap-token] Using token: 8a9tj0.a4ts8ocmz09rc9ud
	I1101 09:29:25.288240  519099 out.go:252]   - Configuring RBAC rules ...
	I1101 09:29:25.288376  519099 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:29:25.291978  519099 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:29:25.297156  519099 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:29:25.299926  519099 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:29:25.302604  519099 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:29:25.306079  519099 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:29:25.655059  519099 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:29:26.072485  519099 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:29:26.655038  519099 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:29:26.655820  519099 kubeadm.go:319] 
	I1101 09:29:26.655917  519099 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:29:26.655927  519099 kubeadm.go:319] 
	I1101 09:29:26.656018  519099 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:29:26.656028  519099 kubeadm.go:319] 
	I1101 09:29:26.656095  519099 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:29:26.656207  519099 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:29:26.656278  519099 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:29:26.656288  519099 kubeadm.go:319] 
	I1101 09:29:26.656349  519099 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:29:26.656359  519099 kubeadm.go:319] 
	I1101 09:29:26.656424  519099 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:29:26.656431  519099 kubeadm.go:319] 
	I1101 09:29:26.656481  519099 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:29:26.656560  519099 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:29:26.656658  519099 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:29:26.656670  519099 kubeadm.go:319] 
	I1101 09:29:26.656781  519099 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:29:26.656930  519099 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:29:26.656955  519099 kubeadm.go:319] 
	I1101 09:29:26.657101  519099 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8a9tj0.a4ts8ocmz09rc9ud \
	I1101 09:29:26.657246  519099 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 09:29:26.657281  519099 kubeadm.go:319] 	--control-plane 
	I1101 09:29:26.657290  519099 kubeadm.go:319] 
	I1101 09:29:26.657443  519099 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:29:26.657453  519099 kubeadm.go:319] 
	I1101 09:29:26.657540  519099 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8a9tj0.a4ts8ocmz09rc9ud \
	I1101 09:29:26.657658  519099 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
	I1101 09:29:26.660028  519099 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:29:26.660136  519099 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
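
The --discovery-token-ca-cert-hash printed with the join commands above is a SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, so it can be recomputed from the CA certificate independently of kubeadm. A minimal sketch follows; the ca.crt path is taken from the log above and would need adjusting elsewhere.

// ca_hash_sketch.go - recompute the kubeadm discovery-token-ca-cert-hash
// (sha256 over the CA certificate's DER-encoded public key).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from the log above; adjust for other clusters.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Marshal the public key back to DER (SubjectPublicKeyInfo) and hash it.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
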
	I1101 09:29:26.660162  519099 cni.go:84] Creating CNI manager for ""
	I1101 09:29:26.660174  519099 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:26.661429  519099 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:29:26.662298  519099 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:29:26.666687  519099 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:29:26.666706  519099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:29:26.680444  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:29:26.889268  519099 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:29:26.889443  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:26.889532  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-050432 minikube.k8s.io/updated_at=2025_11_01T09_29_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=addons-050432 minikube.k8s.io/primary=true
	I1101 09:29:26.899276  519099 ops.go:34] apiserver oom_adj: -16
	I1101 09:29:26.983283  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:27.483563  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:27.983385  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:28.484389  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:28.983602  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:29.483449  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:29.983521  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:30.483734  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:30.983959  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:31.483432  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:31.983750  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:32.048687  519099 kubeadm.go:1114] duration metric: took 5.159300553s to wait for elevateKubeSystemPrivileges
	I1101 09:29:32.048725  519099 kubeadm.go:403] duration metric: took 15.038477426s to StartCluster
	I1101 09:29:32.048745  519099 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:32.048877  519099 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:29:32.049228  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:32.049431  519099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:29:32.049432  519099 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:32.049456  519099 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:29:32.049564  519099 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-050432"
	I1101 09:29:32.049576  519099 addons.go:70] Setting registry=true in profile "addons-050432"
	I1101 09:29:32.049595  519099 addons.go:239] Setting addon registry=true in "addons-050432"
	I1101 09:29:32.049604  519099 addons.go:70] Setting default-storageclass=true in profile "addons-050432"
	I1101 09:29:32.049623  519099 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-050432"
	I1101 09:29:32.049630  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049638  519099 addons.go:70] Setting ingress=true in profile "addons-050432"
	I1101 09:29:32.049627  519099 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-050432"
	I1101 09:29:32.049655  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049654  519099 addons.go:70] Setting registry-creds=true in profile "addons-050432"
	I1101 09:29:32.049664  519099 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:32.049675  519099 addons.go:70] Setting metrics-server=true in profile "addons-050432"
	I1101 09:29:32.049676  519099 addons.go:239] Setting addon registry-creds=true in "addons-050432"
	I1101 09:29:32.049679  519099 addons.go:70] Setting storage-provisioner=true in profile "addons-050432"
	I1101 09:29:32.049689  519099 addons.go:239] Setting addon metrics-server=true in "addons-050432"
	I1101 09:29:32.049690  519099 addons.go:239] Setting addon storage-provisioner=true in "addons-050432"
	I1101 09:29:32.049649  519099 addons.go:239] Setting addon ingress=true in "addons-050432"
	I1101 09:29:32.049714  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049718  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049724  519099 addons.go:70] Setting volumesnapshots=true in profile "addons-050432"
	I1101 09:29:32.049736  519099 addons.go:239] Setting addon volumesnapshots=true in "addons-050432"
	I1101 09:29:32.049742  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049759  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.050069  519099 addons.go:70] Setting volcano=true in profile "addons-050432"
	I1101 09:29:32.050098  519099 addons.go:239] Setting addon volcano=true in "addons-050432"
	I1101 09:29:32.050124  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.050236  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050246  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050251  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050251  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050256  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.049588  519099 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-050432"
	I1101 09:29:32.050279  519099 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-050432"
	I1101 09:29:32.050302  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.050602  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050716  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050947  519099 addons.go:70] Setting cloud-spanner=true in profile "addons-050432"
	I1101 09:29:32.050969  519099 addons.go:239] Setting addon cloud-spanner=true in "addons-050432"
	I1101 09:29:32.051006  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.051456  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.049631  519099 addons.go:70] Setting gcp-auth=true in profile "addons-050432"
	I1101 09:29:32.051679  519099 mustload.go:66] Loading cluster: addons-050432
	I1101 09:29:32.049667  519099 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-050432"
	I1101 09:29:32.052353  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.052604  519099 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:32.052857  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.049565  519099 addons.go:70] Setting yakd=true in profile "addons-050432"
	I1101 09:29:32.053489  519099 addons.go:239] Setting addon yakd=true in "addons-050432"
	I1101 09:29:32.053557  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.053645  519099 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-050432"
	I1101 09:29:32.053713  519099 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-050432"
	I1101 09:29:32.053744  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.054219  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.054237  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.049656  519099 addons.go:70] Setting ingress-dns=true in profile "addons-050432"
	I1101 09:29:32.054654  519099 addons.go:239] Setting addon ingress-dns=true in "addons-050432"
	I1101 09:29:32.054694  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049622  519099 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-050432"
	I1101 09:29:32.050258  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.057110  519099 out.go:179] * Verifying Kubernetes components...
	I1101 09:29:32.049665  519099 addons.go:70] Setting inspektor-gadget=true in profile "addons-050432"
	I1101 09:29:32.049714  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.057813  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.057962  519099 addons.go:239] Setting addon inspektor-gadget=true in "addons-050432"
	I1101 09:29:32.058007  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.058497  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.059230  519099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:32.067869  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.068392  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.094856  519099 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:29:32.096123  519099 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:29:32.096156  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:29:32.096240  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.102616  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:29:32.103882  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:29:32.103907  519099 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:29:32.104031  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	W1101 09:29:32.104689  519099 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:29:32.125276  519099 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-050432"
	I1101 09:29:32.126081  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.127217  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.138019  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.141045  519099 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:29:32.143367  519099 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1101 09:29:32.144054  519099 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:29:32.144089  519099 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:29:32.144159  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.144373  519099 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:29:32.144478  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:29:32.145417  519099 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:29:32.147954  519099 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:29:32.147977  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:29:32.148017  519099 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:29:32.148035  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:29:32.148039  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.148095  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.148242  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:29:32.148367  519099 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:29:32.151666  519099 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:29:32.151716  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:29:32.152130  519099 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:29:32.152353  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:29:32.152463  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.153172  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:29:32.153188  519099 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:29:32.153238  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.156183  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:29:32.157964  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:29:32.159099  519099 addons.go:239] Setting addon default-storageclass=true in "addons-050432"
	I1101 09:29:32.159404  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.160209  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.165476  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:29:32.165925  519099 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:29:32.165544  519099 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:29:32.168707  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:29:32.170247  519099 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:29:32.174742  519099 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:29:32.174769  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:29:32.174848  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.175301  519099 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:32.175465  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:29:32.175420  519099 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:29:32.176525  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:29:32.176548  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:29:32.176614  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.175367  519099 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:29:32.176906  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:29:32.176969  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.177033  519099 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:32.177127  519099 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:29:32.177162  519099 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:29:32.177174  519099 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:29:32.177238  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.178792  519099 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:29:32.178815  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:29:32.178876  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.184250  519099 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:29:32.184279  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:29:32.184354  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.202546  519099 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:29:32.204621  519099 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:29:32.205253  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.209067  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.214606  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.215058  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.218076  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.219692  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.225463  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.227902  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.229098  519099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:29:32.237722  519099 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:29:32.239076  519099 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:29:32.242118  519099 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:29:32.242146  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:29:32.242211  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.248302  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.256627  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.269365  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.272764  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.273343  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.282298  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.283923  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.293882  519099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:29:32.294905  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.393544  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:29:32.393646  519099 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:29:32.417458  519099 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:29:32.417494  519099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:29:32.417704  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:29:32.417729  519099 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:29:32.420970  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:29:32.422764  519099 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:29:32.422782  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:29:32.426324  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:29:32.428453  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:29:32.456659  519099 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:32.456689  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:29:32.460148  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:29:32.460402  519099 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:29:32.460423  519099 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:29:32.462021  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:29:32.462041  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:29:32.462261  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:29:32.465369  519099 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:29:32.465389  519099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:29:32.474468  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:29:32.476146  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:29:32.478760  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:29:32.478820  519099 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:29:32.478886  519099 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:29:32.478960  519099 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:29:32.490368  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:29:32.490563  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:32.499081  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:29:32.515053  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:29:32.515083  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:29:32.516090  519099 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:29:32.516109  519099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:29:32.525976  519099 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:29:32.526070  519099 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:29:32.533551  519099 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:29:32.533592  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:29:32.536761  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:29:32.536786  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:29:32.570125  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:29:32.570251  519099 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:29:32.592032  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:29:32.594347  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:29:32.594374  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:29:32.596531  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:29:32.625338  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:29:32.668310  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:29:32.668341  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:29:32.678381  519099 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:32.678470  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:29:32.715205  519099 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 09:29:32.718708  519099 node_ready.go:35] waiting up to 6m0s for node "addons-050432" to be "Ready" ...
	I1101 09:29:32.723892  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:29:32.723970  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:29:32.742995  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:32.807577  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:29:32.807687  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:29:32.898404  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:29:32.898532  519099 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:29:32.958041  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:29:32.958139  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:29:33.009343  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:29:33.009375  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:29:33.081283  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:29:33.081316  519099 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:29:33.123133  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:29:33.223673  519099 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-050432" context rescaled to 1 replicas
	I1101 09:29:33.476961  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048463541s)
	I1101 09:29:33.714997  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.254808073s)
	I1101 09:29:33.715048  519099 addons.go:480] Verifying addon ingress=true in "addons-050432"
	I1101 09:29:33.715283  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.25299315s)
	I1101 09:29:33.715393  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.240887661s)
	I1101 09:29:33.715495  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.239315549s)
	I1101 09:29:33.715530  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.225133924s)
	I1101 09:29:33.715681  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.225096247s)
	W1101 09:29:33.715724  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:33.715759  519099 retry.go:31] will retry after 288.971351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
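The validation failure above is the root cause of the InspektorGadget retries that follow: kubectl's client-side validation rejects any manifest that does not declare the mandatory apiVersion and kind fields, so the ig-crd.yaml applied from /etc/kubernetes/addons is missing its type metadata. A minimal check from the host (profile name and path taken from the log above; the head invocation is only an illustrative sketch, not part of the test) would be:

	minikube -p addons-050432 ssh -- head -n 5 /etc/kubernetes/addons/ig-crd.yaml

A well-formed CRD manifest opens with type metadata such as "apiVersion: apiextensions.k8s.io/v1" and "kind: CustomResourceDefinition"; without those two fields every re-run of the apply fails identically, which matches the repeated "apiVersion not set, kind not set" errors below.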
	I1101 09:29:33.715728  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21662131s)
	I1101 09:29:33.715799  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.123733521s)
	I1101 09:29:33.715824  519099 addons.go:480] Verifying addon registry=true in "addons-050432"
	I1101 09:29:33.715968  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.119406941s)
	I1101 09:29:33.715990  519099 addons.go:480] Verifying addon metrics-server=true in "addons-050432"
	I1101 09:29:33.716049  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.090674403s)
	I1101 09:29:33.717244  519099 out.go:179] * Verifying registry addon...
	I1101 09:29:33.717245  519099 out.go:179] * Verifying ingress addon...
	I1101 09:29:33.717244  519099 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-050432 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:29:33.719128  519099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:29:33.719480  519099 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:29:33.721700  519099 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:29:33.721730  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:33.723699  519099 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
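The 'default-storageclass' warning above is a routine optimistic-concurrency conflict: the callback tried to mark the local-path StorageClass as non-default using a stale resourceVersion while the object had just been modified, so the apiserver refused the write. The same change can be made without racing on resourceVersion by patching the well-known default-class annotation; the command below is only a hedged sketch of that kind of update, not the call minikube itself issues:

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

Because a strategic-merge patch is resolved server-side against the current object, it does not depend on the client holding the latest resourceVersion, which is exactly what the update behind this warning tripped over.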
	I1101 09:29:33.746772  519099 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:29:33.746801  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:34.005350  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:34.184192  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.441074057s)
	W1101 09:29:34.184291  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:29:34.184329  519099 retry.go:31] will retry after 128.87656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
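The retry above is an ordering problem rather than a broken manifest: the snapshot.storage.k8s.io CRDs and the VolumeSnapshotClass that depends on them are sent in a single apply, so kubectl cannot yet map the VolumeSnapshotClass kind when it reaches csi-hostpath-snapshotclass.yaml, hence "ensure CRDs are installed first". Splitting the apply and waiting for the CRD to become established avoids the race; the sequence below is a sketch using the same manifests named in the log (the 60s timeout is an arbitrary choice, and the log's full sudo KUBECONFIG=... prefix is omitted for brevity):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

In this run the second, --force apply issued at 09:29:34.313497 plays the same role: by the time it executes, the CRDs created by the first attempt are registered, and no further retry for the snapshot manifests appears in the log below.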
	I1101 09:29:34.184610  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.061415264s)
	I1101 09:29:34.184634  519099 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-050432"
	I1101 09:29:34.187238  519099 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:29:34.189289  519099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:29:34.193303  519099 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:29:34.193335  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:34.223156  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:34.223359  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:34.313497  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1101 09:29:34.633097  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:34.633126  519099 retry.go:31] will retry after 190.255939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:34.693549  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:34.722579  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:34.722608  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:29:34.722666  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:34.823747  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:35.194104  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:35.222109  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:35.222138  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:35.693436  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:35.722410  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:35.722456  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:36.193022  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:36.221930  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:36.222251  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:36.692487  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:36.722815  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:36.722936  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:36.723182  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:36.823808  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.000014673s)
	W1101 09:29:36.823887  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:36.823905  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.510364751s)
	I1101 09:29:36.823917  519099 retry.go:31] will retry after 333.829323ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:37.158107  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:37.193531  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:37.222883  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:37.223028  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:37.693526  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:37.719962  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:37.719997  519099 retry.go:31] will retry after 545.128756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:37.722290  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:37.722509  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:38.192874  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:38.222591  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:38.222817  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:38.265916  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:38.693548  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:38.721847  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:38.722032  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:29:38.818309  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:38.818343  519099 retry.go:31] will retry after 1.706670957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:39.193189  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:39.221398  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:39.222108  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:39.222705  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:39.693954  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:39.722553  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:39.722810  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:39.760068  519099 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:29:39.760147  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:39.777599  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:39.885897  519099 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:29:39.900185  519099 addons.go:239] Setting addon gcp-auth=true in "addons-050432"
	I1101 09:29:39.900264  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:39.900626  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:39.918295  519099 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:29:39.918354  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:39.936168  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:40.036966  519099 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:40.038016  519099 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:29:40.038911  519099 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:29:40.038928  519099 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:29:40.052667  519099 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:29:40.052697  519099 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:29:40.066233  519099 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:29:40.066257  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:29:40.079778  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:29:40.193549  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:40.222354  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:40.222620  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:40.398822  519099 addons.go:480] Verifying addon gcp-auth=true in "addons-050432"
	I1101 09:29:40.399927  519099 out.go:179] * Verifying gcp-auth addon...
	I1101 09:29:40.401437  519099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:29:40.405756  519099 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:29:40.405777  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:40.525977  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:40.692560  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:40.722424  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:40.722576  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:40.904606  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:29:41.084355  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:41.084390  519099 retry.go:31] will retry after 1.920037926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:41.192594  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:41.222524  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:41.222597  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:41.222728  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:41.404792  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:41.693227  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:41.721917  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:41.722093  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:41.905003  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:42.193251  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:42.222233  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:42.222287  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:42.405155  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:42.692299  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:42.722133  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:42.722355  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:42.906100  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:43.005223  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:43.193078  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:43.222158  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:43.222699  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:43.404759  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:29:43.569873  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:43.569910  519099 retry.go:31] will retry after 3.287494215s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:43.692870  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:43.721670  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:43.722390  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:43.722573  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:43.904322  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:44.192525  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:44.222399  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:44.222604  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:44.405587  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:44.692616  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:44.722538  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:44.722625  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:44.904487  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:45.193031  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:45.221779  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:45.222127  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:45.406065  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:45.693296  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:45.722144  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:45.722343  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:45.722406  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:45.904869  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:46.192986  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:46.221778  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:46.222725  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:46.404795  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:46.693352  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:46.722221  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:46.722438  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:46.858441  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:46.904668  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:47.193558  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:47.222660  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:47.222868  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:47.404822  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:29:47.415247  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:47.415279  519099 retry.go:31] will retry after 2.244820979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:47.692316  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:47.722024  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:47.722185  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:47.905241  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:48.192377  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:48.222281  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:48.222365  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:48.222545  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:48.404329  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:48.692497  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:48.722223  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:48.722411  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:48.904975  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:49.193290  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:49.223036  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:49.223281  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.405263  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:49.660781  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:49.693515  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:49.722411  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.722555  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:49.904061  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:50.192699  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:50.222249  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:50.222354  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:29:50.224044  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:50.224073  519099 retry.go:31] will retry after 7.889880289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:50.405281  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:50.692204  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:50.722085  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:50.722090  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:50.722325  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:50.904903  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:51.193637  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:51.222483  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:51.222592  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:51.404261  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:51.692100  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:51.721857  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:51.722355  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:51.905328  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:52.192285  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:52.221981  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:52.222047  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:52.405074  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:52.693090  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:52.721849  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:52.722715  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:52.904151  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:53.193170  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:53.222053  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:53.222204  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:53.222203  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:53.405076  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:53.693328  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:53.722252  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:53.722315  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:53.905049  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:54.193127  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:54.222141  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:54.222508  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:54.404822  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:54.692702  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:54.722553  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:54.722747  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:54.904425  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:55.192856  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:55.222414  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:55.222637  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:55.404964  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:55.693157  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:55.721814  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:55.721820  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:55.722311  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:55.905091  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:56.193198  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:56.221801  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:56.222477  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:56.404477  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:56.692517  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:56.722295  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:56.722489  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:56.905236  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:57.192580  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:57.222416  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:57.222432  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:57.405315  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:57.692028  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:57.721541  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:57.722251  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:57.904938  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:58.114241  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:58.192347  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:58.222169  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:58.222292  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:58.222342  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:58.405448  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:29:58.671703  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:58.671746  519099 retry.go:31] will retry after 6.232771096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:58.692695  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:58.722503  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:58.722663  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:58.904153  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:59.193406  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:59.222293  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:59.222369  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:59.405015  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:59.692924  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:59.722334  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:59.722445  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:59.904519  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:00.192401  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:00.222090  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:00.222266  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:00.405152  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:00.692322  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:00.724796  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:30:00.724958  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:00.726075  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:00.905231  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:01.192307  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:01.222450  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.222474  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:01.405391  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:01.692419  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:01.722274  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:01.722410  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.904318  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:02.192334  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:02.222249  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.222471  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:02.405339  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:02.692465  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:02.722382  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:02.722444  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.905521  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:03.192614  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:03.222647  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:03.222668  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:30:03.222712  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:03.404485  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:03.692370  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:03.722208  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:03.722304  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:03.905183  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.192141  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:04.222168  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:04.222322  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:04.405176  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.692801  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:04.722415  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:04.722488  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:04.904364  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.905394  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:05.192789  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:05.221428  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:05.222272  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:05.405554  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:05.468412  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:05.468447  519099 retry.go:31] will retry after 21.20891017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:05.692738  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:05.722614  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:05.722612  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:05.722774  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:05.904571  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:06.192975  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:06.221988  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:06.222318  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:06.405282  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:06.692237  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:06.722109  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:06.722307  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:06.905628  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:07.193432  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:07.222404  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:07.222602  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.404717  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:07.693019  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:07.722019  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:07.722510  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.904409  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:08.192274  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:08.222269  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:30:08.222308  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:08.222481  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:08.404672  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:08.692748  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:08.723000  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:08.723403  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:08.904933  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:09.193087  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:09.222031  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.222200  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:09.405046  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:09.693075  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:09.724213  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.724417  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:09.904795  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:10.192817  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:10.222592  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:10.222663  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:10.404279  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:10.692462  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:10.722322  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:10.722343  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:10.722514  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:10.905363  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:11.192222  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:11.221856  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:11.222051  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:11.404730  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:11.692656  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:11.722535  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:11.722627  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:11.905409  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:12.192624  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:12.222634  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:12.222737  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:12.404659  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:12.692821  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:12.722383  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:12.722401  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:12.904373  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.192256  519099 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:30:13.192280  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:13.221502  519099 node_ready.go:49] node "addons-050432" is "Ready"
	I1101 09:30:13.221539  519099 node_ready.go:38] duration metric: took 40.502796006s for node "addons-050432" to be "Ready" ...
	I1101 09:30:13.221559  519099 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:30:13.221626  519099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:30:13.221662  519099 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:30:13.221693  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:13.224814  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.241664  519099 api_server.go:72] duration metric: took 41.192130567s to wait for apiserver process to appear ...
	I1101 09:30:13.241695  519099 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:30:13.241721  519099 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 09:30:13.246417  519099 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 09:30:13.247599  519099 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:13.247636  519099 api_server.go:131] duration metric: took 5.933584ms to wait for apiserver health ...
	I1101 09:30:13.247651  519099 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:13.251391  519099 system_pods.go:59] 20 kube-system pods found
	I1101 09:30:13.251431  519099 system_pods.go:61] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:13.251439  519099 system_pods.go:61] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:13.251449  519099 system_pods.go:61] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:13.251453  519099 system_pods.go:61] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending
	I1101 09:30:13.251459  519099 system_pods.go:61] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:13.251466  519099 system_pods.go:61] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:13.251472  519099 system_pods.go:61] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:13.251476  519099 system_pods.go:61] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:13.251485  519099 system_pods.go:61] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:13.251493  519099 system_pods.go:61] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:13.251497  519099 system_pods.go:61] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:13.251500  519099 system_pods.go:61] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:13.251505  519099 system_pods.go:61] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:13.251514  519099 system_pods.go:61] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:13.251521  519099 system_pods.go:61] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:13.251526  519099 system_pods.go:61] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:13.251533  519099 system_pods.go:61] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:13.251538  519099 system_pods.go:61] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.251545  519099 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending
	I1101 09:30:13.251550  519099 system_pods.go:61] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:13.251558  519099 system_pods.go:74] duration metric: took 3.899579ms to wait for pod list to return data ...
	I1101 09:30:13.251567  519099 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:30:13.253888  519099 default_sa.go:45] found service account: "default"
	I1101 09:30:13.253917  519099 default_sa.go:55] duration metric: took 2.339005ms for default service account to be created ...
	I1101 09:30:13.253926  519099 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:30:13.258503  519099 system_pods.go:86] 20 kube-system pods found
	I1101 09:30:13.258539  519099 system_pods.go:89] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:13.258547  519099 system_pods.go:89] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:13.258554  519099 system_pods.go:89] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:13.258558  519099 system_pods.go:89] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending
	I1101 09:30:13.258563  519099 system_pods.go:89] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:13.258567  519099 system_pods.go:89] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:13.258571  519099 system_pods.go:89] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:13.258575  519099 system_pods.go:89] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:13.258578  519099 system_pods.go:89] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:13.258584  519099 system_pods.go:89] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:13.258590  519099 system_pods.go:89] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:13.258594  519099 system_pods.go:89] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:13.258601  519099 system_pods.go:89] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:13.258607  519099 system_pods.go:89] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:13.258615  519099 system_pods.go:89] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:13.258619  519099 system_pods.go:89] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:13.258632  519099 system_pods.go:89] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:13.258641  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.258650  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending
	I1101 09:30:13.258657  519099 system_pods.go:89] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:13.258681  519099 retry.go:31] will retry after 270.622651ms: missing components: kube-dns
	I1101 09:30:13.409195  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.536278  519099 system_pods.go:86] 20 kube-system pods found
	I1101 09:30:13.536324  519099 system_pods.go:89] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:13.536334  519099 system_pods.go:89] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:13.536346  519099 system_pods.go:89] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:13.536353  519099 system_pods.go:89] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:13.536361  519099 system_pods.go:89] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:13.536366  519099 system_pods.go:89] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:13.536372  519099 system_pods.go:89] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:13.536378  519099 system_pods.go:89] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:13.536392  519099 system_pods.go:89] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:13.536401  519099 system_pods.go:89] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:13.536406  519099 system_pods.go:89] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:13.536414  519099 system_pods.go:89] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:13.536421  519099 system_pods.go:89] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:13.536430  519099 system_pods.go:89] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:13.536437  519099 system_pods.go:89] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:13.536446  519099 system_pods.go:89] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:13.536455  519099 system_pods.go:89] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:13.536463  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.536471  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.536479  519099 system_pods.go:89] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:13.536500  519099 retry.go:31] will retry after 380.716652ms: missing components: kube-dns
	I1101 09:30:13.693990  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:13.723011  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:13.723131  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.905437  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.922460  519099 system_pods.go:86] 20 kube-system pods found
	I1101 09:30:13.922503  519099 system_pods.go:89] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:13.922515  519099 system_pods.go:89] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:13.922526  519099 system_pods.go:89] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:13.922537  519099 system_pods.go:89] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:13.922547  519099 system_pods.go:89] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:13.922554  519099 system_pods.go:89] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:13.922561  519099 system_pods.go:89] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:13.922567  519099 system_pods.go:89] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:13.922577  519099 system_pods.go:89] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:13.922587  519099 system_pods.go:89] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:13.922596  519099 system_pods.go:89] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:13.922602  519099 system_pods.go:89] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:13.922614  519099 system_pods.go:89] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:13.922626  519099 system_pods.go:89] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:13.922637  519099 system_pods.go:89] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:13.922645  519099 system_pods.go:89] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:13.922654  519099 system_pods.go:89] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:13.922662  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.922676  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.922684  519099 system_pods.go:89] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:13.922708  519099 retry.go:31] will retry after 293.8172ms: missing components: kube-dns
	I1101 09:30:14.193938  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:14.221890  519099 system_pods.go:86] 20 kube-system pods found
	I1101 09:30:14.221935  519099 system_pods.go:89] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:14.221944  519099 system_pods.go:89] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Running
	I1101 09:30:14.221957  519099 system_pods.go:89] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:14.221968  519099 system_pods.go:89] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:14.221985  519099 system_pods.go:89] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:14.221992  519099 system_pods.go:89] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:14.221998  519099 system_pods.go:89] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:14.222006  519099 system_pods.go:89] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:14.222022  519099 system_pods.go:89] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:14.222032  519099 system_pods.go:89] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:14.222037  519099 system_pods.go:89] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:14.222043  519099 system_pods.go:89] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:14.222051  519099 system_pods.go:89] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:14.222059  519099 system_pods.go:89] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:14.222069  519099 system_pods.go:89] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:14.222078  519099 system_pods.go:89] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:14.222086  519099 system_pods.go:89] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:14.222094  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:14.222104  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:14.222110  519099 system_pods.go:89] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Running
	I1101 09:30:14.222121  519099 system_pods.go:126] duration metric: took 968.188399ms to wait for k8s-apps to be running ...
	I1101 09:30:14.222136  519099 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:30:14.222200  519099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:30:14.222821  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.222986  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:14.238578  519099 system_svc.go:56] duration metric: took 16.431621ms WaitForService to wait for kubelet
	I1101 09:30:14.238616  519099 kubeadm.go:587] duration metric: took 42.1890915s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:14.238646  519099 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:14.242002  519099 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:30:14.242046  519099 node_conditions.go:123] node cpu capacity is 8
	I1101 09:30:14.242071  519099 node_conditions.go:105] duration metric: took 3.417938ms to run NodePressure ...
	I1101 09:30:14.242088  519099 start.go:242] waiting for startup goroutines ...
	I1101 09:30:14.405393  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:14.693015  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:14.722965  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:14.722995  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.905324  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:15.193246  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:15.223242  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:15.223388  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:15.405742  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:15.693388  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:15.722490  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:15.722534  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:15.904501  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:16.195075  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:16.224906  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:16.225753  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:16.405403  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:16.693257  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:16.723924  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:16.725159  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:16.906198  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:17.194468  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:17.222982  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:17.223072  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:17.405903  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:17.694138  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:17.723199  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:17.723250  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:17.905607  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:18.193703  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:18.222882  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:18.223085  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:18.405302  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:18.693103  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:18.723374  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:18.723438  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:18.905783  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:19.193876  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:19.223355  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:19.223376  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:19.405485  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:19.694283  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:19.723526  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:19.723739  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:19.904654  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:20.193460  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:20.223493  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:20.223496  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:20.405287  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:20.692687  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:20.722633  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:20.722737  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:20.905080  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:21.194215  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:21.222951  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:21.223116  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:21.404934  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:21.694961  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:21.723272  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:21.723284  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:21.905336  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:22.194521  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:22.222762  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:22.222891  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:22.404903  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:22.694306  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:22.724979  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:22.725373  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:22.904727  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:23.193516  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:23.222865  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:23.222913  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:23.404766  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:23.712210  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:23.722993  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:23.723074  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:23.905641  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:24.193822  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:24.222515  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:24.222576  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:24.404445  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:24.692741  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:24.722868  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:24.722982  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:24.905132  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:25.194097  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:25.223351  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:25.223409  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:25.406652  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:25.694048  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:25.723701  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:25.723744  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:25.905623  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:26.193879  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:26.223341  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:26.223633  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.405820  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:26.677940  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:26.693486  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:26.722747  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:26.722826  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.904663  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:27.263748  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:27.263823  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:27.263918  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:27.371364  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:27.371402  519099 retry.go:31] will retry after 27.055947224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:27.405157  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:27.692857  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:27.722729  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:27.722941  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:27.904823  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:28.193429  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:28.222255  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:28.222888  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:28.404943  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:28.693200  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:28.723042  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:28.723086  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:28.904918  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:29.193678  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:29.222763  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.222901  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:29.405122  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:29.694635  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:29.722477  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.722673  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:29.905002  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:30.193550  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:30.222656  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:30.222672  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:30.404490  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:30.692741  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:30.722737  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:30.722747  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:30.905449  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:31.192578  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:31.222467  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:31.222508  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:31.405308  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:31.693048  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:31.722737  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:31.722954  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:31.905176  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:32.192727  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:32.222776  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:32.222878  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.404663  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:32.693013  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:32.722954  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:32.723024  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.905692  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:33.193448  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:33.222463  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.222648  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:33.405063  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:33.693980  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:33.722926  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.723067  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:33.907412  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:34.193684  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:34.223089  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:34.223324  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:34.405175  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:34.693944  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:34.722957  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:34.723032  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:34.905308  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:35.193188  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:35.222938  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:35.223128  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:35.405239  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:35.693371  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:35.723858  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:35.723858  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:35.905014  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:36.194046  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:36.295139  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:36.295315  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.405102  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:36.693996  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:36.723159  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.723169  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:36.904957  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:37.216752  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:37.223278  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:37.223364  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:37.405128  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:37.692718  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:37.722712  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:37.722828  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:37.905135  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:38.193771  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:38.222472  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.222531  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:38.404698  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:38.693144  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:38.722829  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.722990  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:38.904582  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:39.193096  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:39.223518  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:39.223955  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:39.405254  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:39.692958  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:39.723476  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:39.723787  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:39.907049  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:40.194121  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:40.224626  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:40.224722  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:40.405011  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:40.693600  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:40.722416  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:40.722431  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:40.905153  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:41.193351  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:41.222119  519099 kapi.go:107] duration metric: took 1m7.502983362s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:30:41.222874  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.404975  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:41.693530  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:41.724335  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.905486  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:42.192774  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:42.222531  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:42.404254  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:42.692673  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:42.722478  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:42.905313  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:43.221431  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:43.328592  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:43.568814  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:43.693467  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:43.794547  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:43.904547  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:44.192781  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:44.222584  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:44.404303  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:44.693159  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:44.723217  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:44.905797  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:45.195599  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:45.225569  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:45.406874  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:45.693064  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:45.722889  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:45.905366  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:46.193182  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:46.223061  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.405549  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:46.693315  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:46.723496  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.905942  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:47.194094  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:47.294390  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:47.405126  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:47.693879  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:47.722711  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:47.905089  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:48.193573  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:48.224029  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:48.405613  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:48.693327  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:48.723545  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:48.904768  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:49.193311  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:49.223827  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:49.405530  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:49.693748  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:49.722801  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:49.905325  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:50.192877  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:50.222431  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:50.405320  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:50.694403  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:50.723849  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:50.905570  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:51.193301  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:51.223378  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:51.405223  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:51.693011  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:51.723230  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:51.906175  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:52.192674  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:52.224022  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:52.404945  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:52.693268  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:52.723317  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:52.907312  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:53.192870  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:53.222827  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.404035  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:53.693489  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:53.723081  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.905714  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:54.193415  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:54.223586  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:54.404332  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:54.428419  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:54.693236  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:54.723366  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:54.905329  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:55.079647  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:55.079687  519099 retry.go:31] will retry after 27.58208303s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:55.193566  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:55.223874  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:55.405619  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:55.693458  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:55.723241  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:55.907344  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:56.195453  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:56.224200  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:56.406196  519099 kapi.go:107] duration metric: took 1m16.004752522s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:30:56.407774  519099 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-050432 cluster.
	I1101 09:30:56.409039  519099 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:30:56.410097  519099 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1101 09:30:56.693798  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:56.724024  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:57.193570  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:57.223815  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:57.693999  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:57.723203  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:58.194563  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:58.223613  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:58.693541  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:58.723828  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:59.193334  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:59.223185  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:59.692920  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:59.722532  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.193461  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:00.223382  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.693222  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:00.723727  519099 kapi.go:107] duration metric: took 1m27.004239809s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:31:01.193544  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:01.694525  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:02.193085  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:02.693606  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:03.192888  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:03.694202  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:04.193265  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:04.693910  519099 kapi.go:107] duration metric: took 1m30.504619765s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:31:22.662127  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:31:23.224180  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:31:23.224304  519099 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
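Note: the repeated inspektor-gadget failure above is kubectl's client-side manifest validation — every YAML document passed to apply must declare top-level apiVersion and kind, and ig-crd.yaml is rejected because at least one of its documents does not. A rough way to see which document is at fault, sketched under the assumption that the file is still present on the node (the grep pattern is illustrative only):

	# Inspect the rejected manifest on the minikube node; documents (separated by
	# "---") with no apiVersion:/kind: lines are the ones validation rejects.
	# "minikube" here stands for the out/minikube-linux-amd64 binary used by this run.
	minikube -p addons-050432 ssh -- sudo cat /etc/kubernetes/addons/ig-crd.yaml | grep -nE '^(---|apiVersion:|kind:)'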
	I1101 09:31:23.227169  519099 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1101 09:31:23.228173  519099 addons.go:515] duration metric: took 1m51.178718833s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1101 09:31:23.228221  519099 start.go:247] waiting for cluster config update ...
	I1101 09:31:23.228247  519099 start.go:256] writing updated cluster config ...
	I1101 09:31:23.228560  519099 ssh_runner.go:195] Run: rm -f paused
	I1101 09:31:23.232675  519099 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:23.236685  519099 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q9w79" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.241184  519099 pod_ready.go:94] pod "coredns-66bc5c9577-q9w79" is "Ready"
	I1101 09:31:23.241208  519099 pod_ready.go:86] duration metric: took 4.495258ms for pod "coredns-66bc5c9577-q9w79" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.243307  519099 pod_ready.go:83] waiting for pod "etcd-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.247712  519099 pod_ready.go:94] pod "etcd-addons-050432" is "Ready"
	I1101 09:31:23.247734  519099 pod_ready.go:86] duration metric: took 4.401171ms for pod "etcd-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.249728  519099 pod_ready.go:83] waiting for pod "kube-apiserver-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.253667  519099 pod_ready.go:94] pod "kube-apiserver-addons-050432" is "Ready"
	I1101 09:31:23.253692  519099 pod_ready.go:86] duration metric: took 3.942504ms for pod "kube-apiserver-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.255730  519099 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.637293  519099 pod_ready.go:94] pod "kube-controller-manager-addons-050432" is "Ready"
	I1101 09:31:23.637324  519099 pod_ready.go:86] duration metric: took 381.571204ms for pod "kube-controller-manager-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.836962  519099 pod_ready.go:83] waiting for pod "kube-proxy-4zrl2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:24.236469  519099 pod_ready.go:94] pod "kube-proxy-4zrl2" is "Ready"
	I1101 09:31:24.236509  519099 pod_ready.go:86] duration metric: took 399.518195ms for pod "kube-proxy-4zrl2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:24.437477  519099 pod_ready.go:83] waiting for pod "kube-scheduler-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:24.836740  519099 pod_ready.go:94] pod "kube-scheduler-addons-050432" is "Ready"
	I1101 09:31:24.836768  519099 pod_ready.go:86] duration metric: took 399.265438ms for pod "kube-scheduler-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:24.836780  519099 pod_ready.go:40] duration metric: took 1.604071211s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:24.885155  519099 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:31:24.886464  519099 out.go:179] * Done! kubectl is now configured to use "addons-050432" cluster and "default" namespace by default
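The wait loop above is the extra 4m0s readiness check minikube runs against the labelled kube-system pods before declaring the profile ready. The same check can be reproduced from the host once the kubectl context exists (a minimal sketch; the label selectors simply mirror the ones listed in the log, and --timeout matches the 4m budget above):

  kubectl --context addons-050432 -n kube-system get pods \
    -l 'k8s-app in (kube-dns, kube-proxy)' -o wide
  kubectl --context addons-050432 -n kube-system wait pod \
    -l component=kube-apiserver --for=condition=Ready --timeout=4m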
	
	
	==> CRI-O <==
	Nov 01 09:32:31 addons-050432 crio[764]: time="2025-11-01T09:32:31.399229186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:32:31 addons-050432 crio[764]: time="2025-11-01T09:32:31.400101413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:32:31 addons-050432 crio[764]: time="2025-11-01T09:32:31.432007213Z" level=info msg="Created container b7a845d69511c5a7c84a9a6f4d1362b76e48a8878503a47305e7ec115b19c10a: kube-system/registry-creds-764b6fb674-8s95r/registry-creds" id=2a6a3af4-1a93-405f-93c8-2e780fc535b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:32:31 addons-050432 crio[764]: time="2025-11-01T09:32:31.43260964Z" level=info msg="Starting container: b7a845d69511c5a7c84a9a6f4d1362b76e48a8878503a47305e7ec115b19c10a" id=2e6278c9-a261-4985-b4f1-e3b378841ddd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:32:31 addons-050432 crio[764]: time="2025-11-01T09:32:31.434317321Z" level=info msg="Started container" PID=9005 containerID=b7a845d69511c5a7c84a9a6f4d1362b76e48a8878503a47305e7ec115b19c10a description=kube-system/registry-creds-764b6fb674-8s95r/registry-creds id=2e6278c9-a261-4985-b4f1-e3b378841ddd name=/runtime.v1.RuntimeService/StartContainer sandboxID=f418d85d69261f8102866ce96c3d2a50fe7eeb7f6d365e7f4320f7842f14d378
	Nov 01 09:32:31 addons-050432 crio[764]: time="2025-11-01T09:32:31.680489395Z" level=info msg="Removing container: 3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0" id=eee2da8d-e48f-4050-8e1a-c244cc24f798 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:32:31 addons-050432 crio[764]: time="2025-11-01T09:32:31.687893458Z" level=info msg="Removed container 3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0: default/task-pv-pod-restore/task-pv-container" id=eee2da8d-e48f-4050-8e1a-c244cc24f798 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:33:25 addons-050432 crio[764]: time="2025-11-01T09:33:25.972913907Z" level=info msg="Stopping pod sandbox: 666de4c57f3086fc83b285ce720dcbf0c2bcd4623b8bb5252f8903d7f28b6f3e" id=c0b8bc2f-1968-402b-bb15-5488a568e807 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:33:25 addons-050432 crio[764]: time="2025-11-01T09:33:25.972993515Z" level=info msg="Stopped pod sandbox (already stopped): 666de4c57f3086fc83b285ce720dcbf0c2bcd4623b8bb5252f8903d7f28b6f3e" id=c0b8bc2f-1968-402b-bb15-5488a568e807 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:33:25 addons-050432 crio[764]: time="2025-11-01T09:33:25.973297891Z" level=info msg="Removing pod sandbox: 666de4c57f3086fc83b285ce720dcbf0c2bcd4623b8bb5252f8903d7f28b6f3e" id=4be1f5df-6d61-44f7-88f5-0d5b796ba15d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:33:25 addons-050432 crio[764]: time="2025-11-01T09:33:25.976530365Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:33:25 addons-050432 crio[764]: time="2025-11-01T09:33:25.976593897Z" level=info msg="Removed pod sandbox: 666de4c57f3086fc83b285ce720dcbf0c2bcd4623b8bb5252f8903d7f28b6f3e" id=4be1f5df-6d61-44f7-88f5-0d5b796ba15d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.200736655Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-d9g5z/POD" id=454d3e08-8c4c-4426-9f52-4f466dd15035 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.200880548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.209096211Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-d9g5z Namespace:default ID:9a9112b050591d56c335f1d83a6502c14345aa8ce8f4fd9b910d9434ab37fdfb UID:28f00ef5-83f9-47db-9b47-40bfcd5c3839 NetNS:/var/run/netns/2b44a5b5-3514-406b-805a-7a2680858479 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009da488}] Aliases:map[]}"
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.209138993Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-d9g5z to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.220402131Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-d9g5z Namespace:default ID:9a9112b050591d56c335f1d83a6502c14345aa8ce8f4fd9b910d9434ab37fdfb UID:28f00ef5-83f9-47db-9b47-40bfcd5c3839 NetNS:/var/run/netns/2b44a5b5-3514-406b-805a-7a2680858479 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009da488}] Aliases:map[]}"
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.220537856Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-d9g5z for CNI network kindnet (type=ptp)"
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.221527539Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.222465886Z" level=info msg="Ran pod sandbox 9a9112b050591d56c335f1d83a6502c14345aa8ce8f4fd9b910d9434ab37fdfb with infra container: default/hello-world-app-5d498dc89-d9g5z/POD" id=454d3e08-8c4c-4426-9f52-4f466dd15035 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.223886943Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=41fced96-18af-49d1-909f-3bac28b0d0c9 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.224056487Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=41fced96-18af-49d1-909f-3bac28b0d0c9 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.224104291Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=41fced96-18af-49d1-909f-3bac28b0d0c9 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.22494309Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=99d4cbcf-84fe-4b5c-9903-13ab8f4b6793 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:34:13 addons-050432 crio[764]: time="2025-11-01T09:34:13.244112267Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
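The tail of the CRI-O log shows the runtime failing to find docker.io/kicbase/echo-server:1.0 locally and then starting a pull for the hello-world-app pod. The same state can be inspected by hand from inside the node; a sketch, assuming the default kicbase node where crictl is available and needs root:

  minikube -p addons-050432 ssh -- sudo crictl images | grep echo-server
  minikube -p addons-050432 ssh -- sudo crictl pull docker.io/kicbase/echo-server:1.0
  minikube -p addons-050432 ssh -- sudo crictl pods --name hello-world-app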
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	b7a845d69511c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   f418d85d69261       registry-creds-764b6fb674-8s95r             kube-system
	efbd668b58c0f       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   c91bb67b9c6fc       nginx                                       default
	f9a609afe9466       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   b05a5d83dd35c       busybox                                     default
	0cd2226cd22ce       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   6198459e9a1ae       csi-hostpathplugin-kgt98                    kube-system
	ebc6c01c90c2f       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   6198459e9a1ae       csi-hostpathplugin-kgt98                    kube-system
	81c14cf7ac31f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   6198459e9a1ae       csi-hostpathplugin-kgt98                    kube-system
	ba4952c9861da       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   6198459e9a1ae       csi-hostpathplugin-kgt98                    kube-system
	156252f8ed3d9       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   3ddb99ff3796f       ingress-nginx-controller-675c5ddd98-z8482   ingress-nginx
	5ef835cd52f21       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   6323bd8989773       gcp-auth-78565c9fb4-ll292                   gcp-auth
	706e94c8f54a5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   f870604cf07c7       gadget-hcssg                                gadget
	f18ba15647b79       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   6198459e9a1ae       csi-hostpathplugin-kgt98                    kube-system
	9138926a4ebf5       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   a4fa77bbf1181       local-path-provisioner-648f6765c9-vwzcp     local-path-storage
	b24762f9cf57c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   d6e867b734bad       csi-hostpath-attacher-0                     kube-system
	43b485de84b03       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   7dcf20edbc88d       nvidia-device-plugin-daemonset-585vh        kube-system
	1b71e4eeb4433       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   f6955374c5ff4       snapshot-controller-7d9fbc56b8-tqzj5        kube-system
	c4071d2f7fecc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   809aa23cc33d7       amd-gpu-device-plugin-xj8r5                 kube-system
	c19b6a74eec58       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   eb65c7dfa68e8       registry-proxy-ftdnb                        kube-system
	47018dafba328       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   6a70f7e130670       csi-hostpath-resizer-0                      kube-system
	39e74546adc34       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   6198459e9a1ae       csi-hostpathplugin-kgt98                    kube-system
	174b86c0a84a5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              patch                                    0                   7be699f21f136       ingress-nginx-admission-patch-8r4w5         ingress-nginx
	785e2c163a99a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   ec1b0999ab9eb       ingress-nginx-admission-create-6l9tg        ingress-nginx
	c898b96b19d0d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   46613574c817a       snapshot-controller-7d9fbc56b8-l826d        kube-system
	c092281ab72f4       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago        Running             cloud-spanner-emulator                   0                   264893e31d9ac       cloud-spanner-emulator-6f9fcf858b-j9ktg     default
	9dcaa1d9a58cf       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   b362663006f36       yakd-dashboard-5ff678cb9-gjr68              yakd-dashboard
	8649b5d2321a7       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   a8d92b4d08e6d       registry-6b586f9694-tdrzt                   kube-system
	36ab9635dbc1f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   9657d3f312df7       kube-ingress-dns-minikube                   kube-system
	b19635021e0f8       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   a02a3a03bb46c       metrics-server-85b7d694d7-qbbqn             kube-system
	d286de98535c9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago        Running             coredns                                  0                   1f679bfc3737b       coredns-66bc5c9577-q9w79                    kube-system
	8472deab524cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   5eb3eb7fcc297       storage-provisioner                         kube-system
	ac5196f7d4eef       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   527f88f17e8df       kindnet-thccv                               kube-system
	c71a5e39f6a59       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   0020d30cb9558       kube-proxy-4zrl2                            kube-system
	cb6c48350e965       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   a1b754859a9d7       kube-scheduler-addons-050432                kube-system
	381d7ec1c72ca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   13b85f7258a47       etcd-addons-050432                          kube-system
	80a3924ff0d87       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   0cbc8ced4abd3       kube-controller-manager-addons-050432       kube-system
	aa9abb8571eaa       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   57decffa05a5d       kube-apiserver-addons-050432                kube-system
	
	
	==> coredns [d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4] <==
	[INFO] 10.244.0.21:59885 - 15957 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006534195s
	[INFO] 10.244.0.21:35569 - 28997 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004884907s
	[INFO] 10.244.0.21:57551 - 42138 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006837135s
	[INFO] 10.244.0.21:44659 - 59464 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003463597s
	[INFO] 10.244.0.21:47174 - 27851 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005677545s
	[INFO] 10.244.0.21:46493 - 20068 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000949664s
	[INFO] 10.244.0.21:42302 - 20657 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001308513s
	[INFO] 10.244.0.26:34941 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000361412s
	[INFO] 10.244.0.26:50503 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181146s
	[INFO] 10.244.0.31:37883 - 51628 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000263287s
	[INFO] 10.244.0.31:47414 - 8270 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00035098s
	[INFO] 10.244.0.31:46640 - 3460 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000126583s
	[INFO] 10.244.0.31:51083 - 55032 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000155585s
	[INFO] 10.244.0.31:45165 - 7848 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000083946s
	[INFO] 10.244.0.31:55912 - 44162 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000092496s
	[INFO] 10.244.0.31:43861 - 36458 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.002949685s
	[INFO] 10.244.0.31:60523 - 49091 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003037687s
	[INFO] 10.244.0.31:36192 - 49533 "AAAA IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.00517477s
	[INFO] 10.244.0.31:47717 - 3254 "A IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005366635s
	[INFO] 10.244.0.31:53652 - 6839 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005385005s
	[INFO] 10.244.0.31:46619 - 53387 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005843789s
	[INFO] 10.244.0.31:54860 - 18121 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004347047s
	[INFO] 10.244.0.31:44262 - 11357 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004651998s
	[INFO] 10.244.0.31:41114 - 35966 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001498359s
	[INFO] 10.244.0.31:42420 - 12491 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.00169964s
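The run of NXDOMAIN answers above is expected behaviour rather than a resolver fault: with the default ndots:5 pod resolv.conf, names like storage.googleapis.com and accounts.google.com are first expanded through every search domain (the cluster.local suffixes plus the GCE-provided *.internal ones) before the bare name is tried, which is why only the final A/AAAA pair returns NOERROR. One way to confirm the search list, reusing the busybox pod already running in the default namespace (a sketch; assumes the image ships the busybox nslookup applet):

  kubectl --context addons-050432 exec busybox -- cat /etc/resolv.conf
  kubectl --context addons-050432 exec busybox -- nslookup storage.googleapis.com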
	
	
	==> describe nodes <==
	Name:               addons-050432
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-050432
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=addons-050432
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-050432
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-050432"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-050432
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:34:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:33:40 +0000   Sat, 01 Nov 2025 09:29:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:33:40 +0000   Sat, 01 Nov 2025 09:29:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:33:40 +0000   Sat, 01 Nov 2025 09:29:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:33:40 +0000   Sat, 01 Nov 2025 09:30:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-050432
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                bf32987f-5f0a-4a39-8f48-6b363304d873
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  default                     cloud-spanner-emulator-6f9fcf858b-j9ktg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  default                     hello-world-app-5d498dc89-d9g5z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-hcssg                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  gcp-auth                    gcp-auth-78565c9fb4-ll292                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-z8482    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m41s
	  kube-system                 amd-gpu-device-plugin-xj8r5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 coredns-66bc5c9577-q9w79                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m43s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 csi-hostpathplugin-kgt98                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 etcd-addons-050432                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m48s
	  kube-system                 kindnet-thccv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m43s
	  kube-system                 kube-apiserver-addons-050432                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-050432        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-4zrl2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-addons-050432                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 metrics-server-85b7d694d7-qbbqn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m41s
	  kube-system                 nvidia-device-plugin-daemonset-585vh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-6b586f9694-tdrzt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 registry-creds-764b6fb674-8s95r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 registry-proxy-ftdnb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-l826d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 snapshot-controller-7d9fbc56b8-tqzj5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  local-path-storage          local-path-provisioner-648f6765c9-vwzcp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-gjr68               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m41s  kube-proxy       
	  Normal  Starting                 4m49s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m48s  kubelet          Node addons-050432 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s  kubelet          Node addons-050432 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s  kubelet          Node addons-050432 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m44s  node-controller  Node addons-050432 event: Registered Node addons-050432 in Controller
	  Normal  NodeReady                4m1s   kubelet          Node addons-050432 status is now: NodeReady
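This node dump can be regenerated at any point while the profile is up, and the Allocated resources table (the sum of pod requests and limits) is the quickest way to see how much headroom the addons leave on the 8-CPU / 32 GiB node. A sketch:

  kubectl --context addons-050432 describe node addons-050432
  kubectl --context addons-050432 top node    # needs the metrics-server addon enabled above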
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
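The repeated "martian source" lines mean the kernel saw packets addressed to pod IP 10.244.0.22 with a loopback source (127.0.0.1) arrive on eth0; a loopback source on a non-loopback interface is by definition martian and gets logged when martian logging is enabled. They coincide with the nginx/ingress activity in this run and read as noise rather than a failure. To check the relevant sysctls on the node (a sketch):

  minikube -p addons-050432 ssh -- sudo sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter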
	
	
	==> etcd [381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd] <==
	{"level":"warn","ts":"2025-11-01T09:29:23.207558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.214471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.227958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.234382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.241481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.293327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:34.560338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:34.568012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:00.696693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:00.703196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:00.718244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:00.734708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55526","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:30:37.358150Z","caller":"traceutil/trace.go:172","msg":"trace[992132178] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"122.557838ms","start":"2025-11-01T09:30:37.235565Z","end":"2025-11-01T09:30:37.358123Z","steps":["trace[992132178] 'process raft request'  (duration: 122.504158ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:37.358162Z","caller":"traceutil/trace.go:172","msg":"trace[1049890627] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"123.783863ms","start":"2025-11-01T09:30:37.234352Z","end":"2025-11-01T09:30:37.358136Z","steps":["trace[1049890627] 'process raft request'  (duration: 123.27207ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:30:43.326242Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.849826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:30:43.326362Z","caller":"traceutil/trace.go:172","msg":"trace[1309233321] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"104.989508ms","start":"2025-11-01T09:30:43.221354Z","end":"2025-11-01T09:30:43.326343Z","steps":["trace[1309233321] 'agreement among raft nodes before linearized reading'  (duration: 54.782995ms)","trace[1309233321] 'range keys from in-memory index tree'  (duration: 50.029059ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:30:43.326359Z","caller":"traceutil/trace.go:172","msg":"trace[138011217] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"144.66617ms","start":"2025-11-01T09:30:43.181682Z","end":"2025-11-01T09:30:43.326348Z","steps":["trace[138011217] 'process raft request'  (duration: 94.495371ms)","trace[138011217] 'compare'  (duration: 49.980206ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:30:43.326506Z","caller":"traceutil/trace.go:172","msg":"trace[915082310] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"138.930428ms","start":"2025-11-01T09:30:43.187552Z","end":"2025-11-01T09:30:43.326482Z","steps":["trace[915082310] 'process raft request'  (duration: 138.738529ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:43.486253Z","caller":"traceutil/trace.go:172","msg":"trace[386791505] linearizableReadLoop","detail":"{readStateIndex:1141; appliedIndex:1141; }","duration":"125.441591ms","start":"2025-11-01T09:30:43.360777Z","end":"2025-11-01T09:30:43.486218Z","steps":["trace[386791505] 'read index received'  (duration: 125.432048ms)","trace[386791505] 'applied index is now lower than readState.Index'  (duration: 8.575µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:30:43.507734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.932645ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:30:43.507821Z","caller":"traceutil/trace.go:172","msg":"trace[1045497463] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1107; }","duration":"147.029855ms","start":"2025-11-01T09:30:43.360772Z","end":"2025-11-01T09:30:43.507802Z","steps":["trace[1045497463] 'agreement among raft nodes before linearized reading'  (duration: 125.5481ms)","trace[1045497463] 'range keys from in-memory index tree'  (duration: 21.355985ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:30:43.507914Z","caller":"traceutil/trace.go:172","msg":"trace[2105378846] transaction","detail":"{read_only:false; response_revision:1108; number_of_response:1; }","duration":"177.304009ms","start":"2025-11-01T09:30:43.330597Z","end":"2025-11-01T09:30:43.507901Z","steps":["trace[2105378846] 'process raft request'  (duration: 155.654302ms)","trace[2105378846] 'compare'  (duration: 21.500748ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:30:43.566910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.897318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:30:43.566982Z","caller":"traceutil/trace.go:172","msg":"trace[127506552] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1108; }","duration":"162.986628ms","start":"2025-11-01T09:30:43.403982Z","end":"2025-11-01T09:30:43.566969Z","steps":["trace[127506552] 'agreement among raft nodes before linearized reading'  (duration: 162.818491ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:43.567098Z","caller":"traceutil/trace.go:172","msg":"trace[308816269] transaction","detail":"{read_only:false; response_revision:1109; number_of_response:1; }","duration":"234.286956ms","start":"2025-11-01T09:30:43.332794Z","end":"2025-11-01T09:30:43.567081Z","steps":["trace[308816269] 'process raft request'  (duration: 234.177285ms)"],"step_count":1}
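The handful of "apply request took too long" warnings (roughly 100-235 ms against the 100 ms target) are etcd's slow-apply tracing; they usually indicate brief disk or CPU contention on the host, and the traces show the requests still completing, so they are informational here. To count them without re-reading the dump (a sketch, using the etcd pod name from the container list above):

  kubectl --context addons-050432 -n kube-system logs etcd-addons-050432 | grep -c 'took too long'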
	
	
	==> gcp-auth [5ef835cd52f219e26ae9cd94a356a41e4d8a412a0cc14ffe5f5e2e93827e82b5] <==
	2025/11/01 09:30:55 GCP Auth Webhook started!
	2025/11/01 09:31:25 Ready to marshal response ...
	2025/11/01 09:31:25 Ready to write response ...
	2025/11/01 09:31:25 Ready to marshal response ...
	2025/11/01 09:31:25 Ready to write response ...
	2025/11/01 09:31:25 Ready to marshal response ...
	2025/11/01 09:31:25 Ready to write response ...
	2025/11/01 09:31:35 Ready to marshal response ...
	2025/11/01 09:31:35 Ready to write response ...
	2025/11/01 09:31:35 Ready to marshal response ...
	2025/11/01 09:31:35 Ready to write response ...
	2025/11/01 09:31:45 Ready to marshal response ...
	2025/11/01 09:31:45 Ready to write response ...
	2025/11/01 09:31:45 Ready to marshal response ...
	2025/11/01 09:31:45 Ready to write response ...
	2025/11/01 09:31:48 Ready to marshal response ...
	2025/11/01 09:31:48 Ready to write response ...
	2025/11/01 09:31:56 Ready to marshal response ...
	2025/11/01 09:31:56 Ready to write response ...
	2025/11/01 09:32:23 Ready to marshal response ...
	2025/11/01 09:32:23 Ready to write response ...
	2025/11/01 09:34:12 Ready to marshal response ...
	2025/11/01 09:34:12 Ready to write response ...
	
	
	==> kernel <==
	 09:34:14 up  2:16,  0 user,  load average: 0.36, 0.98, 13.35
	Linux addons-050432 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120] <==
	I1101 09:32:12.884021       1 main.go:301] handling current node
	I1101 09:32:22.884319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:32:22.884360       1 main.go:301] handling current node
	I1101 09:32:32.884028       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:32:32.884063       1 main.go:301] handling current node
	I1101 09:32:42.884095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:32:42.884167       1 main.go:301] handling current node
	I1101 09:32:52.883757       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:32:52.883804       1 main.go:301] handling current node
	I1101 09:33:02.884186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:02.884256       1 main.go:301] handling current node
	I1101 09:33:12.883942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:12.884004       1 main.go:301] handling current node
	I1101 09:33:22.884221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:22.884258       1 main.go:301] handling current node
	I1101 09:33:32.884452       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:32.884479       1 main.go:301] handling current node
	I1101 09:33:42.884184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:42.884222       1 main.go:301] handling current node
	I1101 09:33:52.887242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:33:52.887291       1 main.go:301] handling current node
	I1101 09:34:02.883486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:34:02.883526       1 main.go:301] handling current node
	I1101 09:34:12.883910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:34:12.883951       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b] <==
	W1101 09:30:00.727159       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 09:30:13.074688       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.213.38:443: connect: connection refused
	W1101 09:30:13.074739       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.213.38:443: connect: connection refused
	E1101 09:30:13.074738       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.213.38:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:13.074769       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.213.38:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:13.096425       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.213.38:443: connect: connection refused
	E1101 09:30:13.096474       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.213.38:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:13.101705       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.213.38:443: connect: connection refused
	E1101 09:30:13.101816       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.213.38:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:16.074479       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.214.173:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:16.074509       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:30:16.074581       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:30:16.074821       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.214.173:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:16.080816       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.214.173:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:16.101567       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.214.173:443: connect: connection refused" logger="UnhandledError"
	I1101 09:30:16.176726       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:31:34.585216       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40886: use of closed network connection
	E1101 09:31:34.742768       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40910: use of closed network connection
	I1101 09:31:48.686235       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:31:48.894108       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.113.196"}
	I1101 09:32:07.318237       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1101 09:34:12.958552       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.147.93"}
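The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" entries are transient: at 09:30:13 the gcp-auth webhook service was not serving yet, and because the webhook admits requests on failure ("failing open", i.e. failurePolicy Ignore) the objects were created anyway; the webhook itself reports started at 09:30:55 in the gcp-auth log above. To inspect the registered policy (a sketch; list first rather than guessing the configuration name the addon installs):

  kubectl --context addons-050432 get mutatingwebhookconfigurations
  kubectl --context addons-050432 get mutatingwebhookconfigurations \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.webhooks[*].failurePolicy}{"\n"}{end}'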
	
	
	==> kube-controller-manager [80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853] <==
	I1101 09:29:30.678641       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:29:30.680003       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:29:30.680040       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:29:30.680096       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:29:30.680119       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:29:30.680180       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:29:30.680247       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:29:30.680281       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:29:30.680456       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:29:30.680896       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:29:30.680902       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:29:30.682087       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:29:30.684098       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:29:30.685203       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:29:30.686368       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:29:30.700874       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:29:33.403278       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 09:30:00.689275       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:30:00.689510       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:30:00.689587       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:30:00.708568       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:30:00.712576       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:30:00.790248       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:00.812704       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:15.634427       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
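The "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors from the resource-quota and garbage-collector controllers are the usual transient noise while the metrics-server APIService is still registering; they stop once the aggregated API becomes available (the apiserver adds the metrics.k8s.io group at 09:30:16 above). To confirm the APIService is healthy afterwards (a sketch):

  kubectl --context addons-050432 get apiservice v1beta1.metrics.k8s.io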
	
	
	==> kube-proxy [c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90] <==
	I1101 09:29:32.441575       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:29:32.693991       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:29:32.799217       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:29:32.799259       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:29:32.799380       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:29:33.100213       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:29:33.100313       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:29:33.142254       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:29:33.142815       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:29:33.142979       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:33.145119       1 config.go:200] "Starting service config controller"
	I1101 09:29:33.145176       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:29:33.145234       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:29:33.145257       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:29:33.145323       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:29:33.145350       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:29:33.147655       1 config.go:309] "Starting node config controller"
	I1101 09:29:33.147712       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:29:33.256872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:29:33.257062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:29:33.257079       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:29:33.257112       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043] <==
	E1101 09:29:23.704706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:29:23.704761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:29:23.704767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:29:23.704814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:29:23.704820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:29:23.704870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:29:23.704903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:29:23.704928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:29:23.704905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:29:23.704978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:29:23.705012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:29:23.705058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:29:23.705063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:29:23.704564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:29:23.705116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:29:24.560210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:29:24.609948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:29:24.660255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:29:24.703974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:29:24.708169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:29:24.816712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:29:24.929254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:29:24.943432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:29:24.950572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1101 09:29:25.302264       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:32:25 addons-050432 kubelet[1284]: I1101 09:32:25.659955    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.351310476 podStartE2EDuration="2.659935919s" podCreationTimestamp="2025-11-01 09:32:23 +0000 UTC" firstStartedPulling="2025-11-01 09:32:23.392010354 +0000 UTC m=+177.577137244" lastFinishedPulling="2025-11-01 09:32:24.700635797 +0000 UTC m=+178.885762687" observedRunningTime="2025-11-01 09:32:25.659417786 +0000 UTC m=+179.844544698" watchObservedRunningTime="2025-11-01 09:32:25.659935919 +0000 UTC m=+179.845062829"
	Nov 01 09:32:25 addons-050432 kubelet[1284]: I1101 09:32:25.926225    1284 scope.go:117] "RemoveContainer" containerID="deab6a20d382c14ef78f5691a44ac28f8ed16d4f197d5dcc617ed5372fe94f81"
	Nov 01 09:32:25 addons-050432 kubelet[1284]: I1101 09:32:25.935161    1284 scope.go:117] "RemoveContainer" containerID="a6e5bf05eee4fa736483eef836c2e137f59dfa5fa4345e9e8eb886d013b5429c"
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.437788    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33b8d91d-598c-44ad-a55e-357e85fe17d1-gcp-creds\") pod \"33b8d91d-598c-44ad-a55e-357e85fe17d1\" (UID: \"33b8d91d-598c-44ad-a55e-357e85fe17d1\") "
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.437868    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbmbf\" (UniqueName: \"kubernetes.io/projected/33b8d91d-598c-44ad-a55e-357e85fe17d1-kube-api-access-sbmbf\") pod \"33b8d91d-598c-44ad-a55e-357e85fe17d1\" (UID: \"33b8d91d-598c-44ad-a55e-357e85fe17d1\") "
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.437907    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33b8d91d-598c-44ad-a55e-357e85fe17d1-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "33b8d91d-598c-44ad-a55e-357e85fe17d1" (UID: "33b8d91d-598c-44ad-a55e-357e85fe17d1"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.438067    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ab1d6379-b705-11f0-b523-5a442b72f9c6\") pod \"33b8d91d-598c-44ad-a55e-357e85fe17d1\" (UID: \"33b8d91d-598c-44ad-a55e-357e85fe17d1\") "
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.438248    1284 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33b8d91d-598c-44ad-a55e-357e85fe17d1-gcp-creds\") on node \"addons-050432\" DevicePath \"\""
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.440611    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b8d91d-598c-44ad-a55e-357e85fe17d1-kube-api-access-sbmbf" (OuterVolumeSpecName: "kube-api-access-sbmbf") pod "33b8d91d-598c-44ad-a55e-357e85fe17d1" (UID: "33b8d91d-598c-44ad-a55e-357e85fe17d1"). InnerVolumeSpecName "kube-api-access-sbmbf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.441566    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^ab1d6379-b705-11f0-b523-5a442b72f9c6" (OuterVolumeSpecName: "task-pv-storage") pod "33b8d91d-598c-44ad-a55e-357e85fe17d1" (UID: "33b8d91d-598c-44ad-a55e-357e85fe17d1"). InnerVolumeSpecName "pvc-49000138-9a25-424a-9b25-8fe6c0ba4f6a". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.538780    1284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbmbf\" (UniqueName: \"kubernetes.io/projected/33b8d91d-598c-44ad-a55e-357e85fe17d1-kube-api-access-sbmbf\") on node \"addons-050432\" DevicePath \"\""
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.538896    1284 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-49000138-9a25-424a-9b25-8fe6c0ba4f6a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ab1d6379-b705-11f0-b523-5a442b72f9c6\") on node \"addons-050432\" "
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.543339    1284 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-49000138-9a25-424a-9b25-8fe6c0ba4f6a" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^ab1d6379-b705-11f0-b523-5a442b72f9c6") on node "addons-050432"
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.639204    1284 reconciler_common.go:299] "Volume detached for volume \"pvc-49000138-9a25-424a-9b25-8fe6c0ba4f6a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ab1d6379-b705-11f0-b523-5a442b72f9c6\") on node \"addons-050432\" DevicePath \"\""
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.679166    1284 scope.go:117] "RemoveContainer" containerID="3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0"
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.688211    1284 scope.go:117] "RemoveContainer" containerID="3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0"
	Nov 01 09:32:31 addons-050432 kubelet[1284]: E1101 09:32:31.688731    1284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0\": container with ID starting with 3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0 not found: ID does not exist" containerID="3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0"
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.688782    1284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0"} err="failed to get container status \"3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0\": rpc error: code = NotFound desc = could not find container \"3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0\": container with ID starting with 3e3bf24cea8363c3379f01e8957fb69a7398c7dcc5f548d081901fa4b83724c0 not found: ID does not exist"
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.690001    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-8s95r" podStartSLOduration=177.257208047 podStartE2EDuration="2m59.689981696s" podCreationTimestamp="2025-11-01 09:29:32 +0000 UTC" firstStartedPulling="2025-11-01 09:32:28.923462415 +0000 UTC m=+183.108589321" lastFinishedPulling="2025-11-01 09:32:31.356236077 +0000 UTC m=+185.541362970" observedRunningTime="2025-11-01 09:32:31.689717241 +0000 UTC m=+185.874844151" watchObservedRunningTime="2025-11-01 09:32:31.689981696 +0000 UTC m=+185.875108590"
	Nov 01 09:32:31 addons-050432 kubelet[1284]: I1101 09:32:31.903734    1284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b8d91d-598c-44ad-a55e-357e85fe17d1" path="/var/lib/kubelet/pods/33b8d91d-598c-44ad-a55e-357e85fe17d1/volumes"
	Nov 01 09:33:01 addons-050432 kubelet[1284]: I1101 09:33:01.900648    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-ftdnb" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:33:05 addons-050432 kubelet[1284]: I1101 09:33:05.901667    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-585vh" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:33:12 addons-050432 kubelet[1284]: I1101 09:33:12.901070    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xj8r5" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:34:12 addons-050432 kubelet[1284]: I1101 09:34:12.948576    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbzv7\" (UniqueName: \"kubernetes.io/projected/28f00ef5-83f9-47db-9b47-40bfcd5c3839-kube-api-access-tbzv7\") pod \"hello-world-app-5d498dc89-d9g5z\" (UID: \"28f00ef5-83f9-47db-9b47-40bfcd5c3839\") " pod="default/hello-world-app-5d498dc89-d9g5z"
	Nov 01 09:34:12 addons-050432 kubelet[1284]: I1101 09:34:12.948705    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28f00ef5-83f9-47db-9b47-40bfcd5c3839-gcp-creds\") pod \"hello-world-app-5d498dc89-d9g5z\" (UID: \"28f00ef5-83f9-47db-9b47-40bfcd5c3839\") " pod="default/hello-world-app-5d498dc89-d9g5z"
	
	
	==> storage-provisioner [8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7] <==
	W1101 09:33:50.492912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:52.495903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:52.499715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:54.503278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:54.507138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:56.510336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:56.515590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:58.518456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:33:58.522251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:00.525480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:00.530366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:02.533539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:02.538530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:04.542226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:04.546986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:06.549832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:06.554967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:08.558715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:08.563457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:10.566790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:10.571107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:12.574562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:12.579765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:14.582695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:34:14.587285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
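A note on the captured cluster log above: the long run of storage-provisioner warnings ("v1 Endpoints is deprecated in v1.33+") most likely comes from the provisioner refreshing its legacy Endpoints-based leader-election lock every couple of seconds; it is noise rather than a failure cause. The kube-proxy message about nodePortAddresses being unset is likewise informational (the warning itself suggests `--nodeport-addresses primary`). A hedged follow-up, not part of this run, to look at the Endpoints objects the provisioner keeps touching (context name taken from this report):

	# hypothetical follow-up command
	kubectl --context addons-050432 -n kube-system get endpoints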
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-050432 -n addons-050432
helpers_test.go:269: (dbg) Run:  kubectl --context addons-050432 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-6l9tg ingress-nginx-admission-patch-8r4w5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-050432 describe pod ingress-nginx-admission-create-6l9tg ingress-nginx-admission-patch-8r4w5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-050432 describe pod ingress-nginx-admission-create-6l9tg ingress-nginx-admission-patch-8r4w5: exit status 1 (61.332241ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6l9tg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8r4w5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-050432 describe pod ingress-nginx-admission-create-6l9tg ingress-nginx-admission-patch-8r4w5: exit status 1
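The two "non-running" pods above are the ingress-nginx admission webhook's one-shot create/patch Jobs; their pods finish with status Succeeded and may be garbage-collected afterwards, so the NotFound from `describe pod` here is expected post-mortem noise rather than the failure itself. A hedged check, assuming the addon deploys into the usual ingress-nginx namespace (the namespace is an assumption; the context name is from this report):

	# hypothetical follow-up command
	kubectl --context addons-050432 -n ingress-nginx get jobs,pods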
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (255.830007ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:34:15.603641  533915 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:34:15.603934  533915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:15.603943  533915 out.go:374] Setting ErrFile to fd 2...
	I1101 09:34:15.603947  533915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:15.604145  533915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:34:15.604408  533915 mustload.go:66] Loading cluster: addons-050432
	I1101 09:34:15.604806  533915 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:15.604824  533915 addons.go:607] checking whether the cluster is paused
	I1101 09:34:15.604916  533915 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:15.604934  533915 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:34:15.605314  533915 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:34:15.623038  533915 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:15.623093  533915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:34:15.640322  533915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:34:15.742134  533915 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:34:15.742217  533915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:34:15.773927  533915 cri.go:89] found id: "b7a845d69511c5a7c84a9a6f4d1362b76e48a8878503a47305e7ec115b19c10a"
	I1101 09:34:15.773959  533915 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:34:15.773963  533915 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:34:15.773966  533915 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:34:15.773969  533915 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:34:15.773973  533915 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:34:15.773975  533915 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:34:15.773978  533915 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:34:15.773980  533915 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:34:15.773989  533915 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:34:15.773993  533915 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:34:15.773998  533915 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:34:15.774001  533915 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:34:15.774005  533915 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:34:15.774008  533915 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:34:15.774023  533915 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:34:15.774033  533915 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:34:15.774039  533915 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:34:15.774043  533915 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:34:15.774047  533915 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:34:15.774050  533915 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:34:15.774052  533915 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:34:15.774055  533915 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:34:15.774057  533915 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:34:15.774060  533915 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:34:15.774062  533915 cri.go:89] found id: ""
	I1101 09:34:15.774128  533915 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:34:15.790043  533915 out.go:203] 
	W1101 09:34:15.791272  533915 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:34:15.791288  533915 out.go:285] * 
	* 
	W1101 09:34:15.794355  533915 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:34:15.795499  533915 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable ingress --alsologtostderr -v=1: exit status 11 (252.408684ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:34:15.858422  533975 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:34:15.858712  533975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:15.858724  533975 out.go:374] Setting ErrFile to fd 2...
	I1101 09:34:15.858728  533975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:34:15.858944  533975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:34:15.859228  533975 mustload.go:66] Loading cluster: addons-050432
	I1101 09:34:15.859549  533975 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:15.859562  533975 addons.go:607] checking whether the cluster is paused
	I1101 09:34:15.859634  533975 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:34:15.859649  533975 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:34:15.860023  533975 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:34:15.877738  533975 ssh_runner.go:195] Run: systemctl --version
	I1101 09:34:15.877805  533975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:34:15.895572  533975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:34:15.995904  533975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:34:15.995987  533975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:34:16.027593  533975 cri.go:89] found id: "b7a845d69511c5a7c84a9a6f4d1362b76e48a8878503a47305e7ec115b19c10a"
	I1101 09:34:16.027616  533975 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:34:16.027620  533975 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:34:16.027623  533975 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:34:16.027625  533975 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:34:16.027628  533975 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:34:16.027631  533975 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:34:16.027633  533975 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:34:16.027635  533975 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:34:16.027641  533975 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:34:16.027643  533975 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:34:16.027645  533975 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:34:16.027648  533975 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:34:16.027650  533975 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:34:16.027652  533975 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:34:16.027661  533975 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:34:16.027664  533975 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:34:16.027668  533975 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:34:16.027670  533975 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:34:16.027673  533975 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:34:16.027675  533975 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:34:16.027677  533975 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:34:16.027679  533975 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:34:16.027681  533975 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:34:16.027684  533975 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:34:16.027686  533975 cri.go:89] found id: ""
	I1101 09:34:16.027725  533975 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:34:16.042323  533975 out.go:203] 
	W1101 09:34:16.043519  533975 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:34:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:34:16.043537  533975 out.go:285] * 
	* 
	W1101 09:34:16.046584  533975 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:34:16.047779  533975 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.62s)
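Every `addons disable` failure in this section, and in the InspektorGadget and MetricsServer sections below, shares the same signature: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this CRI-O profile that command fails with "open /run/runc: no such file or directory", so the command exits with MK_ADDON_DISABLE_PAUSED even though the CRI-level container listing succeeds. A minimal reproduction sketch using only the commands that appear in the stderr above; wrapping them in `minikube ssh` is an assumption, not something the test does:

	# hypothetical session against the same profile
	minikube -p addons-050432 ssh -- sudo runc list -f json
	#   time="..." level=error msg="open /run/runc: no such file or directory"
	minikube -p addons-050432 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	#   prints container IDs, i.e. the runtime answers CRI queries; only the runc state lookup fails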

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hcssg" [825c47a3-33e4-4cbc-9e08-5717bb11581a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003544133s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (289.784239ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:45.356173  528202 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:45.356348  528202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:45.356361  528202 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:45.356367  528202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:45.356696  528202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:45.357066  528202 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:45.357561  528202 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:45.357584  528202 addons.go:607] checking whether the cluster is paused
	I1101 09:31:45.357723  528202 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:45.357749  528202 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:45.358358  528202 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:45.379332  528202 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:45.379404  528202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:45.400486  528202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:45.508280  528202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:45.508352  528202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:45.544550  528202 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:45.544576  528202 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:45.544582  528202 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:45.544587  528202 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:45.544591  528202 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:45.544596  528202 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:45.544600  528202 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:45.544604  528202 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:45.544609  528202 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:45.544618  528202 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:45.544622  528202 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:45.544627  528202 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:45.544632  528202 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:45.544637  528202 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:45.544642  528202 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:45.544663  528202 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:45.544671  528202 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:45.544677  528202 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:45.544681  528202 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:45.544684  528202 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:45.544698  528202 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:45.544706  528202 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:45.544710  528202 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:45.544716  528202 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:45.544720  528202 cri.go:89] found id: ""
	I1101 09:31:45.544777  528202 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:45.561196  528202 out.go:203] 
	W1101 09:31:45.562453  528202 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:45.562488  528202 out.go:285] * 
	* 
	W1101 09:31:45.566131  528202 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:45.567280  528202 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.762535ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003724059s
addons_test.go:463: (dbg) Run:  kubectl --context addons-050432 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (280.778023ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:45.430854  528234 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:45.431251  528234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:45.431267  528234 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:45.431275  528234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:45.431590  528234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:45.431990  528234 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:45.432486  528234 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:45.432512  528234 addons.go:607] checking whether the cluster is paused
	I1101 09:31:45.432638  528234 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:45.432657  528234 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:45.433264  528234 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:45.456874  528234 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:45.456948  528234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:45.478969  528234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:45.584484  528234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:45.584572  528234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:45.616421  528234 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:45.616458  528234 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:45.616465  528234 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:45.616470  528234 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:45.616474  528234 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:45.616479  528234 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:45.616483  528234 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:45.616486  528234 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:45.616489  528234 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:45.616504  528234 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:45.616509  528234 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:45.616512  528234 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:45.616517  528234 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:45.616521  528234 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:45.616526  528234 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:45.616543  528234 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:45.616554  528234 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:45.616560  528234 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:45.616564  528234 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:45.616567  528234 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:45.616571  528234 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:45.616575  528234 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:45.616581  528234 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:45.616585  528234 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:45.616593  528234 cri.go:89] found id: ""
	I1101 09:31:45.616658  528234 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:45.631722  528234 out.go:203] 
	W1101 09:31:45.632809  528234 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:45.632864  528234 out.go:285] * 
	* 
	W1101 09:31:45.636200  528234 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:45.637462  528234 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)
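Every addon enable/disable failure in this run follows the sequence visible in the stderr above: minikube enumerates kube-system containers with crictl, then probes pause state with `sudo runc list -f json`, and that probe exits 1 ("open /run/runc: no such file or directory"), so the command aborts with MK_ADDON_DISABLE_PAUSED and exit status 11. The Go sketch below only reproduces the two probe commands shown in the log, run locally instead of over minikube's ssh_runner; it is an approximation for triage, not minikube's actual check.

// Sketch of the pause-state probe that fails above: list kube-system
// containers with crictl, then ask runc for its container list.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1: enumerate kube-system containers (this step succeeds in the log).
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

	// Step 2: list runc containers. This is the step that fails in the log with
	// "open /run/runc: no such file or directory", turning every addon
	// enable/disable in this report into exit status 11.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc list output: %s\n", out)
}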

                                                
                                    
x
+
TestAddons/parallel/CSI (42.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 09:31:49.945484  517687 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:31:49.948908  517687 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:31:49.948943  517687 kapi.go:107] duration metric: took 3.502424ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.526633ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-050432 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-050432 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ff1b3b3f-5033-410a-abf1-2843aa3bb5d9] Pending
helpers_test.go:352: "task-pv-pod" [ff1b3b3f-5033-410a-abf1-2843aa3bb5d9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ff1b3b3f-5033-410a-abf1-2843aa3bb5d9] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003521802s
addons_test.go:572: (dbg) Run:  kubectl --context addons-050432 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-050432 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-050432 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-050432 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-050432 delete pod task-pv-pod: (1.216813232s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-050432 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-050432 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-050432 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [33b8d91d-598c-44ad-a55e-357e85fe17d1] Pending
helpers_test.go:352: "task-pv-pod-restore" [33b8d91d-598c-44ad-a55e-357e85fe17d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [33b8d91d-598c-44ad-a55e-357e85fe17d1] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004673216s
addons_test.go:614: (dbg) Run:  kubectl --context addons-050432 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-050432 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-050432 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (254.772523ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:32.091491  531792 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:32.091817  531792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:32.091827  531792 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:32.091831  531792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:32.092069  531792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:32:32.092354  531792 mustload.go:66] Loading cluster: addons-050432
	I1101 09:32:32.092700  531792 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:32.092717  531792 addons.go:607] checking whether the cluster is paused
	I1101 09:32:32.092797  531792 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:32.092814  531792 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:32:32.093233  531792 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:32:32.111219  531792 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:32.111280  531792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:32:32.128883  531792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:32:32.230536  531792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:32.230664  531792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:32.263302  531792 cri.go:89] found id: "b7a845d69511c5a7c84a9a6f4d1362b76e48a8878503a47305e7ec115b19c10a"
	I1101 09:32:32.263335  531792 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:32:32.263339  531792 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:32:32.263343  531792 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:32:32.263345  531792 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:32:32.263349  531792 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:32:32.263352  531792 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:32:32.263354  531792 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:32:32.263357  531792 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:32:32.263371  531792 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:32:32.263373  531792 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:32:32.263376  531792 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:32:32.263378  531792 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:32:32.263381  531792 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:32:32.263384  531792 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:32:32.263395  531792 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:32:32.263403  531792 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:32:32.263408  531792 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:32:32.263414  531792 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:32:32.263417  531792 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:32:32.263419  531792 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:32:32.263422  531792 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:32:32.263424  531792 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:32:32.263427  531792 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:32:32.263430  531792 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:32:32.263432  531792 cri.go:89] found id: ""
	I1101 09:32:32.263489  531792 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:32.278342  531792 out.go:203] 
	W1101 09:32:32.279381  531792 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:32.279397  531792 out.go:285] * 
	* 
	W1101 09:32:32.282500  531792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:32.283766  531792 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (253.036684ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:32:32.349665  531854 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:32:32.350067  531854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:32.350078  531854 out.go:374] Setting ErrFile to fd 2...
	I1101 09:32:32.350082  531854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:32:32.350276  531854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:32:32.350566  531854 mustload.go:66] Loading cluster: addons-050432
	I1101 09:32:32.350933  531854 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:32.350951  531854 addons.go:607] checking whether the cluster is paused
	I1101 09:32:32.351034  531854 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:32:32.351053  531854 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:32:32.351442  531854 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:32:32.369097  531854 ssh_runner.go:195] Run: systemctl --version
	I1101 09:32:32.369166  531854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:32:32.387505  531854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:32:32.486995  531854 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:32:32.487094  531854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:32:32.517555  531854 cri.go:89] found id: "b7a845d69511c5a7c84a9a6f4d1362b76e48a8878503a47305e7ec115b19c10a"
	I1101 09:32:32.517582  531854 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:32:32.517588  531854 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:32:32.517591  531854 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:32:32.517594  531854 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:32:32.517599  531854 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:32:32.517602  531854 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:32:32.517604  531854 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:32:32.517607  531854 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:32:32.517612  531854 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:32:32.517615  531854 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:32:32.517617  531854 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:32:32.517620  531854 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:32:32.517622  531854 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:32:32.517625  531854 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:32:32.517632  531854 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:32:32.517635  531854 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:32:32.517638  531854 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:32:32.517640  531854 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:32:32.517642  531854 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:32:32.517645  531854 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:32:32.517647  531854 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:32:32.517650  531854 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:32:32.517652  531854 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:32:32.517654  531854 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:32:32.517657  531854 cri.go:89] found id: ""
	I1101 09:32:32.517700  531854 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:32:32.532178  531854 out.go:203] 
	W1101 09:32:32.533087  531854 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:32:32.533105  531854 out.go:285] * 
	* 
	W1101 09:32:32.536110  531854 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:32:32.536997  531854 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (42.60s)
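The CSI flow itself (PVC, pod, VolumeSnapshot, snapshot-restore PVC, restored pod) completes; the test only fails in the trailing cleanup when `addons disable volumesnapshots` and `addons disable csi-hostpath-driver` hit the same runc pause-probe error as above. The repeated helpers_test.go:402 lines are a poll of `kubectl get pvc <name> -o jsonpath={.status.phase}` until the claim reports Bound; a minimal sketch of that wait loop follows, with the function name and timeout handling as illustrative assumptions rather than the test helper's real API.

// Poll a PVC's phase via kubectl until it is Bound or the deadline passes,
// mirroring the repeated "get pvc ... -o jsonpath={.status.phase}" calls above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	// The test above waits up to 6m0s for "hpvc" and "hpvc-restore" in "default".
	if err := waitForPVCBound("addons-050432", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc hpvc is Bound")
}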

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-050432 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-050432 --alsologtostderr -v=1: exit status 11 (267.692471ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:45.702559  528464 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:45.702708  528464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:45.702717  528464 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:45.702721  528464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:45.702964  528464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:45.703266  528464 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:45.703600  528464 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:45.703618  528464 addons.go:607] checking whether the cluster is paused
	I1101 09:31:45.703706  528464 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:45.703723  528464 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:45.704098  528464 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:45.723345  528464 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:45.723428  528464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:45.741632  528464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:45.848601  528464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:45.848688  528464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:45.883947  528464 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:45.883973  528464 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:45.883978  528464 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:45.883981  528464 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:45.883984  528464 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:45.883987  528464 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:45.883989  528464 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:45.883992  528464 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:45.883994  528464 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:45.884000  528464 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:45.884002  528464 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:45.884005  528464 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:45.884008  528464 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:45.884020  528464 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:45.884024  528464 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:45.884033  528464 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:45.884041  528464 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:45.884048  528464 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:45.884055  528464 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:45.884059  528464 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:45.884071  528464 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:45.884077  528464 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:45.884081  528464 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:45.884088  528464 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:45.884090  528464 cri.go:89] found id: ""
	I1101 09:31:45.884135  528464 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:45.898331  528464 out.go:203] 
	W1101 09:31:45.899302  528464 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:45.899320  528464 out.go:285] * 
	* 
	W1101 09:31:45.902938  528464 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:45.904225  528464 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-050432 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-050432
helpers_test.go:243: (dbg) docker inspect addons-050432:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339",
	        "Created": "2025-11-01T09:29:12.30404353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519756,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:29:12.333366701Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339/hostname",
	        "HostsPath": "/var/lib/docker/containers/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339/hosts",
	        "LogPath": "/var/lib/docker/containers/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339/52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339-json.log",
	        "Name": "/addons-050432",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-050432:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-050432",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "52f6d966a5c3ac670a2793b2b7dacdbcc65ace870bb9dc7e2b26887a1fe85339",
	                "LowerDir": "/var/lib/docker/overlay2/002d67978d79bc0f2e4490bb5ec289013fa9e74d90b8eeb7652b0c6eddbb2c5b-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/002d67978d79bc0f2e4490bb5ec289013fa9e74d90b8eeb7652b0c6eddbb2c5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/002d67978d79bc0f2e4490bb5ec289013fa9e74d90b8eeb7652b0c6eddbb2c5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/002d67978d79bc0f2e4490bb5ec289013fa9e74d90b8eeb7652b0c6eddbb2c5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-050432",
	                "Source": "/var/lib/docker/volumes/addons-050432/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-050432",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-050432",
	                "name.minikube.sigs.k8s.io": "addons-050432",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65d70f781156ed99f9d651bb0f1904a09cf6efefa7f0f3f91a0b2cb1c535e1a9",
	            "SandboxKey": "/var/run/docker/netns/65d70f781156",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-050432": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:db:f1:a1:2a:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "689a180e30ba3609142ebebf73973e7d729fe8df59d4790f17d3a3d8905bbd97",
	                    "EndpointID": "f0d317fa5bcaf94257636a0dd65fefcc78aded5ebf19ba459bbb69652b69140d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-050432",
	                        "52f6d966a5c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
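The inspect output above also shows where the SSH port used by the addon commands comes from: "22/tcp" is published on 127.0.0.1:32888, the same value the earlier cli_runner template lookups resolve before opening the ssh client. A small sketch of that lookup follows; the helper name is an illustrative assumption, and the template matches the one in the log minus the surrounding quotes.

// Read the host port mapped to the node's SSH port (22/tcp) from docker inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-050432")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container inspected above this resolves to 32888.
	fmt.Println("ssh host port:", port)
}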
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-050432 -n addons-050432
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-050432 logs -n 25: (1.197720476s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-314542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-314542   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-314542                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-314542   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ -o=json --download-only -p download-only-762265 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-762265   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-762265                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-762265   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-314542                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-314542   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-762265                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-762265   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ --download-only -p download-docker-887018 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-887018 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p download-docker-887018                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-887018 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ --download-only -p binary-mirror-679292 --alsologtostderr --binary-mirror http://127.0.0.1:45711 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-679292   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ -p binary-mirror-679292                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-679292   │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ addons  │ enable dashboard -p addons-050432                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-050432                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ start   │ -p addons-050432 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ addons-050432 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ ssh     │ addons-050432 ssh cat /opt/local-path-provisioner/pvc-611c46b8-835f-4e6f-b58e-711be421d3e5_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ enable headlamp -p addons-050432 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	│ addons  │ addons-050432 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-050432          │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:28:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:28:49.738334  519099 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:49.738433  519099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:49.738439  519099 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:49.738443  519099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:49.738626  519099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:28:49.739202  519099 out.go:368] Setting JSON to false
	I1101 09:28:49.740139  519099 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7867,"bootTime":1761981463,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:28:49.740240  519099 start.go:143] virtualization: kvm guest
	I1101 09:28:49.770072  519099 out.go:179] * [addons-050432] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:28:49.832323  519099 notify.go:221] Checking for updates...
	I1101 09:28:49.832368  519099 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:28:49.892871  519099 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:28:49.916065  519099 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:28:49.989071  519099 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 09:28:50.050975  519099 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:28:50.073294  519099 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:28:50.155507  519099 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:28:50.178194  519099 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:28:50.178295  519099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:50.237658  519099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-01 09:28:50.226509872 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:50.237781  519099 docker.go:319] overlay module found
	I1101 09:28:50.311483  519099 out.go:179] * Using the docker driver based on user configuration
	I1101 09:28:50.394643  519099 start.go:309] selected driver: docker
	I1101 09:28:50.394673  519099 start.go:930] validating driver "docker" against <nil>
	I1101 09:28:50.394722  519099 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:28:50.395403  519099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:50.456471  519099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-01 09:28:50.446986286 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:50.456653  519099 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:28:50.456884  519099 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:28:50.478719  519099 out.go:179] * Using Docker driver with root privileges
	I1101 09:28:50.519824  519099 cni.go:84] Creating CNI manager for ""
	I1101 09:28:50.519938  519099 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:50.519951  519099 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:28:50.520046  519099 start.go:353] cluster config:
	{Name:addons-050432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 09:28:50.527861  519099 out.go:179] * Starting "addons-050432" primary control-plane node in "addons-050432" cluster
	I1101 09:28:50.528922  519099 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:28:50.529959  519099 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:28:50.530859  519099 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:28:50.530900  519099 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:28:50.530910  519099 cache.go:59] Caching tarball of preloaded images
	I1101 09:28:50.530970  519099 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:28:50.531011  519099 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:28:50.531022  519099 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:28:50.531428  519099 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/config.json ...
	I1101 09:28:50.531452  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/config.json: {Name:mk13bc5aaa312233e0b39caae472a4ee7166ba6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:50.547884  519099 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:50.548002  519099 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:28:50.548019  519099 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:28:50.548025  519099 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:28:50.548033  519099 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:28:50.548040  519099 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 09:29:03.621680  519099 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 09:29:03.621728  519099 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:29:03.621774  519099 start.go:360] acquireMachinesLock for addons-050432: {Name:mk85ed1bbc2ce61443a1b4bdfd37e48e9bf1adde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:29:03.621920  519099 start.go:364] duration metric: took 118.99µs to acquireMachinesLock for "addons-050432"
	I1101 09:29:03.621959  519099 start.go:93] Provisioning new machine with config: &{Name:addons-050432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:03.622071  519099 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:29:03.624186  519099 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 09:29:03.624452  519099 start.go:159] libmachine.API.Create for "addons-050432" (driver="docker")
	I1101 09:29:03.624495  519099 client.go:173] LocalClient.Create starting
	I1101 09:29:03.624598  519099 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem
	I1101 09:29:03.846918  519099 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem
	I1101 09:29:04.148716  519099 cli_runner.go:164] Run: docker network inspect addons-050432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:29:04.166906  519099 cli_runner.go:211] docker network inspect addons-050432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:29:04.167005  519099 network_create.go:284] running [docker network inspect addons-050432] to gather additional debugging logs...
	I1101 09:29:04.167029  519099 cli_runner.go:164] Run: docker network inspect addons-050432
	W1101 09:29:04.184106  519099 cli_runner.go:211] docker network inspect addons-050432 returned with exit code 1
	I1101 09:29:04.184144  519099 network_create.go:287] error running [docker network inspect addons-050432]: docker network inspect addons-050432: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-050432 not found
	I1101 09:29:04.184167  519099 network_create.go:289] output of [docker network inspect addons-050432]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-050432 not found
	
	** /stderr **
	I1101 09:29:04.184263  519099 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:04.201793  519099 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b606e0}
	I1101 09:29:04.201874  519099 network_create.go:124] attempt to create docker network addons-050432 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 09:29:04.201932  519099 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-050432 addons-050432
	I1101 09:29:04.263438  519099 network_create.go:108] docker network addons-050432 192.168.49.0/24 created
	I1101 09:29:04.263480  519099 kic.go:121] calculated static IP "192.168.49.2" for the "addons-050432" container
	I1101 09:29:04.263547  519099 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:29:04.281105  519099 cli_runner.go:164] Run: docker volume create addons-050432 --label name.minikube.sigs.k8s.io=addons-050432 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:29:04.301066  519099 oci.go:103] Successfully created a docker volume addons-050432
	I1101 09:29:04.301164  519099 cli_runner.go:164] Run: docker run --rm --name addons-050432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-050432 --entrypoint /usr/bin/test -v addons-050432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:29:07.874075  519099 cli_runner.go:217] Completed: docker run --rm --name addons-050432-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-050432 --entrypoint /usr/bin/test -v addons-050432:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (3.572628129s)
	I1101 09:29:07.874120  519099 oci.go:107] Successfully prepared a docker volume addons-050432
	I1101 09:29:07.874157  519099 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:07.874189  519099 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:29:07.874256  519099 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-050432:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:29:12.233594  519099 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-050432:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.359290404s)
	I1101 09:29:12.233631  519099 kic.go:203] duration metric: took 4.359438658s to extract preloaded images to volume ...
	W1101 09:29:12.233730  519099 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:29:12.233771  519099 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:29:12.233823  519099 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:29:12.288579  519099 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-050432 --name addons-050432 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-050432 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-050432 --network addons-050432 --ip 192.168.49.2 --volume addons-050432:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:29:12.549388  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Running}}
	I1101 09:29:12.568132  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:12.586104  519099 cli_runner.go:164] Run: docker exec addons-050432 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:29:12.631389  519099 oci.go:144] the created container "addons-050432" has a running status.
	I1101 09:29:12.631443  519099 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa...
	I1101 09:29:12.997200  519099 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:29:13.023690  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:13.041301  519099 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:29:13.041324  519099 kic_runner.go:114] Args: [docker exec --privileged addons-050432 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:29:13.086315  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:13.104632  519099 machine.go:94] provisionDockerMachine start ...
	I1101 09:29:13.104767  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.123188  519099 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:13.123512  519099 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1101 09:29:13.123530  519099 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:29:13.265332  519099 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-050432
	
	I1101 09:29:13.265367  519099 ubuntu.go:182] provisioning hostname "addons-050432"
	I1101 09:29:13.265457  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.283079  519099 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:13.283322  519099 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1101 09:29:13.283346  519099 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-050432 && echo "addons-050432" | sudo tee /etc/hostname
	I1101 09:29:13.435808  519099 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-050432
	
	I1101 09:29:13.435929  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.453396  519099 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:13.453653  519099 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1101 09:29:13.453678  519099 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-050432' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-050432/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-050432' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:29:13.594654  519099 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:29:13.594687  519099 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 09:29:13.594755  519099 ubuntu.go:190] setting up certificates
	I1101 09:29:13.594774  519099 provision.go:84] configureAuth start
	I1101 09:29:13.594855  519099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-050432
	I1101 09:29:13.612518  519099 provision.go:143] copyHostCerts
	I1101 09:29:13.612600  519099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 09:29:13.612734  519099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 09:29:13.612833  519099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 09:29:13.612931  519099 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.addons-050432 san=[127.0.0.1 192.168.49.2 addons-050432 localhost minikube]
	I1101 09:29:13.785748  519099 provision.go:177] copyRemoteCerts
	I1101 09:29:13.785815  519099 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:29:13.785865  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.804072  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:13.905258  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:29:13.925154  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:29:13.942789  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:29:13.960753  519099 provision.go:87] duration metric: took 365.964817ms to configureAuth
	I1101 09:29:13.960782  519099 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:29:13.960986  519099 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:13.961117  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:13.979133  519099 main.go:143] libmachine: Using SSH client type: native
	I1101 09:29:13.979355  519099 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1101 09:29:13.979376  519099 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:29:14.231624  519099 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:29:14.231648  519099 machine.go:97] duration metric: took 1.126974312s to provisionDockerMachine
	I1101 09:29:14.231663  519099 client.go:176] duration metric: took 10.607158949s to LocalClient.Create
	I1101 09:29:14.231687  519099 start.go:167] duration metric: took 10.607235481s to libmachine.API.Create "addons-050432"
	I1101 09:29:14.231697  519099 start.go:293] postStartSetup for "addons-050432" (driver="docker")
	I1101 09:29:14.231713  519099 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:29:14.231783  519099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:29:14.231852  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:14.249683  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:14.352128  519099 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:29:14.355964  519099 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:29:14.355998  519099 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:29:14.356011  519099 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 09:29:14.356083  519099 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 09:29:14.356116  519099 start.go:296] duration metric: took 124.412164ms for postStartSetup
	I1101 09:29:14.356534  519099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-050432
	I1101 09:29:14.373461  519099 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/config.json ...
	I1101 09:29:14.373733  519099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:29:14.373799  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:14.390766  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:14.489100  519099 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:29:14.493864  519099 start.go:128] duration metric: took 10.871749569s to createHost
	I1101 09:29:14.493892  519099 start.go:83] releasing machines lock for "addons-050432", held for 10.871953912s
	I1101 09:29:14.493967  519099 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-050432
	I1101 09:29:14.511350  519099 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:29:14.511392  519099 ssh_runner.go:195] Run: cat /version.json
	I1101 09:29:14.511451  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:14.511453  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:14.531542  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:14.531913  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:14.687667  519099 ssh_runner.go:195] Run: systemctl --version
	I1101 09:29:14.694982  519099 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:29:14.730130  519099 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:29:14.734887  519099 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:29:14.734959  519099 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:29:14.760618  519099 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:29:14.760646  519099 start.go:496] detecting cgroup driver to use...
	I1101 09:29:14.760687  519099 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:29:14.760740  519099 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:29:14.777584  519099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:29:14.790785  519099 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:29:14.790861  519099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:29:14.808054  519099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:29:14.826708  519099 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:29:14.911264  519099 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:29:15.001175  519099 docker.go:234] disabling docker service ...
	I1101 09:29:15.001247  519099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:29:15.021563  519099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:29:15.034872  519099 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:29:15.119011  519099 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:29:15.202571  519099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:29:15.216240  519099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:29:15.231527  519099 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:29:15.231588  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.242082  519099 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:29:15.242151  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.251321  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.260363  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.269453  519099 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:29:15.278022  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.287220  519099 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.301441  519099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:29:15.310783  519099 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:29:15.318554  519099 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:29:15.326193  519099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:15.401515  519099 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:29:15.512813  519099 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:29:15.512914  519099 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:29:15.517021  519099 start.go:564] Will wait 60s for crictl version
	I1101 09:29:15.517091  519099 ssh_runner.go:195] Run: which crictl
	I1101 09:29:15.520706  519099 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:29:15.547235  519099 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:29:15.547348  519099 ssh_runner.go:195] Run: crio --version
	I1101 09:29:15.576174  519099 ssh_runner.go:195] Run: crio --version
	I1101 09:29:15.606970  519099 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:29:15.607906  519099 cli_runner.go:164] Run: docker network inspect addons-050432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:29:15.625198  519099 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:29:15.629643  519099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:29:15.640409  519099 kubeadm.go:884] updating cluster {Name:addons-050432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:29:15.640585  519099 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:15.640659  519099 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:15.674281  519099 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:15.674305  519099 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:29:15.674353  519099 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:15.700405  519099 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:15.700431  519099 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:29:15.700440  519099 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:29:15.700585  519099 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-050432 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:29:15.700683  519099 ssh_runner.go:195] Run: crio config
	I1101 09:29:15.747539  519099 cni.go:84] Creating CNI manager for ""
	I1101 09:29:15.747565  519099 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:15.747587  519099 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:29:15.747612  519099 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-050432 NodeName:addons-050432 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:29:15.747735  519099 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-050432"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:29:15.747795  519099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:29:15.756355  519099 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:29:15.756445  519099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:29:15.764531  519099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:29:15.777214  519099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:29:15.792198  519099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 09:29:15.805652  519099 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:29:15.809613  519099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:29:15.820042  519099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:15.900906  519099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:29:15.925649  519099 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432 for IP: 192.168.49.2
	I1101 09:29:15.925679  519099 certs.go:195] generating shared ca certs ...
	I1101 09:29:15.925703  519099 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:15.926454  519099 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 09:29:16.022046  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt ...
	I1101 09:29:16.022082  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt: {Name:mk63d01b6c9e98cfdc58d5d995f045e109b91fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.022294  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key ...
	I1101 09:29:16.022311  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key: {Name:mk1c088d57a76aec79a4679eab5d0c5fe88c7b8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.022423  519099 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 09:29:16.215738  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt ...
	I1101 09:29:16.215772  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt: {Name:mkef3abd4e19242659ffaf335c2eefaa2d410609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.215990  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key ...
	I1101 09:29:16.216007  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key: {Name:mk2907020cf1dfded2b6a38c835cffcdebe60893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.216117  519099 certs.go:257] generating profile certs ...
	I1101 09:29:16.216181  519099 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.key
	I1101 09:29:16.216197  519099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt with IP's: []
	I1101 09:29:16.424239  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt ...
	I1101 09:29:16.424274  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: {Name:mkdcf555ffcc3ed403b4a9f8892c8fa924b9892d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.424493  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.key ...
	I1101 09:29:16.424508  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.key: {Name:mkff22c4720f09e526d72c814eb218b4abb731ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.424632  519099 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key.11812e3f
	I1101 09:29:16.424663  519099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt.11812e3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 09:29:16.725721  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt.11812e3f ...
	I1101 09:29:16.725753  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt.11812e3f: {Name:mka825e1368231832c84fcee1436857ed56519b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.725972  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key.11812e3f ...
	I1101 09:29:16.726004  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key.11812e3f: {Name:mk629b67aa2bc951d6bb8303aab04a470139f8ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.726129  519099 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt.11812e3f -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt
	I1101 09:29:16.726217  519099 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key.11812e3f -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key
	I1101 09:29:16.726266  519099 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.key
	I1101 09:29:16.726285  519099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.crt with IP's: []
	I1101 09:29:16.757345  519099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.crt ...
	I1101 09:29:16.757379  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.crt: {Name:mk68b51a8f1a074cfa06b541e0d862f35b908512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.757571  519099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.key ...
	I1101 09:29:16.757593  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.key: {Name:mk4461931e886dc045e8553c747238bb971866ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:16.757804  519099 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:29:16.757861  519099 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:29:16.757896  519099 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:29:16.757926  519099 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 09:29:16.758598  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:29:16.777877  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:29:16.796429  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:29:16.814138  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:29:16.831943  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:29:16.851436  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:29:16.869568  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:29:16.887792  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:29:16.905748  519099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:29:16.925744  519099 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:29:16.939270  519099 ssh_runner.go:195] Run: openssl version
	I1101 09:29:16.946032  519099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:29:16.957676  519099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:16.961937  519099 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:16.962005  519099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:16.996474  519099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:29:17.006120  519099 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:29:17.010195  519099 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:29:17.010254  519099 kubeadm.go:401] StartCluster: {Name:addons-050432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-050432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:29:17.010348  519099 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:29:17.010433  519099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:29:17.039120  519099 cri.go:89] found id: ""
	I1101 09:29:17.039194  519099 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:29:17.048265  519099 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:29:17.057132  519099 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:29:17.057184  519099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:29:17.065342  519099 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:29:17.065360  519099 kubeadm.go:158] found existing configuration files:
	
	I1101 09:29:17.065405  519099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:29:17.074375  519099 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:29:17.074430  519099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:29:17.082703  519099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:29:17.091120  519099 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:29:17.091182  519099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:29:17.100265  519099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:29:17.108402  519099 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:29:17.108480  519099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:29:17.115991  519099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:29:17.123999  519099 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:29:17.124051  519099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:29:17.131677  519099 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:29:17.169293  519099 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:29:17.169380  519099 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:29:17.190366  519099 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:29:17.190450  519099 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:29:17.190499  519099 kubeadm.go:319] OS: Linux
	I1101 09:29:17.190552  519099 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:29:17.190606  519099 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:29:17.190667  519099 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:29:17.190726  519099 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:29:17.190782  519099 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:29:17.190863  519099 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:29:17.190922  519099 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:29:17.190964  519099 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:29:17.251190  519099 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:29:17.251348  519099 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:29:17.251499  519099 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:29:17.258625  519099 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:29:17.260770  519099 out.go:252]   - Generating certificates and keys ...
	I1101 09:29:17.260893  519099 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:29:17.260989  519099 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:29:17.519595  519099 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:29:17.771540  519099 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:29:18.025644  519099 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:29:18.155239  519099 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:29:18.359515  519099 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:29:18.359635  519099 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-050432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:29:18.437196  519099 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:29:18.437314  519099 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-050432 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:29:18.583509  519099 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:29:18.976111  519099 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:29:19.401990  519099 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:29:19.402068  519099 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:29:19.790963  519099 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:29:20.089943  519099 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:29:20.117512  519099 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:29:20.182474  519099 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:29:20.610354  519099 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:29:20.611243  519099 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:29:20.615193  519099 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:29:20.618516  519099 out.go:252]   - Booting up control plane ...
	I1101 09:29:20.618612  519099 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:29:20.618681  519099 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:29:20.618743  519099 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:29:20.631922  519099 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:29:20.632071  519099 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:29:20.640210  519099 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:29:20.640368  519099 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:29:20.640445  519099 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:29:20.737187  519099 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:29:20.737306  519099 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:29:21.239430  519099 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.93145ms
	I1101 09:29:21.248073  519099 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:29:21.248229  519099 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 09:29:21.248413  519099 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:29:21.248536  519099 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:29:22.630422  519099 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.382592516s
	I1101 09:29:23.706940  519099 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.45927635s
	I1101 09:29:25.249094  519099 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001308533s
	I1101 09:29:25.260779  519099 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:29:25.270996  519099 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:29:25.279462  519099 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:29:25.279723  519099 kubeadm.go:319] [mark-control-plane] Marking the node addons-050432 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:29:25.287121  519099 kubeadm.go:319] [bootstrap-token] Using token: 8a9tj0.a4ts8ocmz09rc9ud
	I1101 09:29:25.288240  519099 out.go:252]   - Configuring RBAC rules ...
	I1101 09:29:25.288376  519099 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:29:25.291978  519099 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:29:25.297156  519099 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:29:25.299926  519099 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:29:25.302604  519099 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:29:25.306079  519099 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:29:25.655059  519099 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:29:26.072485  519099 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:29:26.655038  519099 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:29:26.655820  519099 kubeadm.go:319] 
	I1101 09:29:26.655917  519099 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:29:26.655927  519099 kubeadm.go:319] 
	I1101 09:29:26.656018  519099 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:29:26.656028  519099 kubeadm.go:319] 
	I1101 09:29:26.656095  519099 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:29:26.656207  519099 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:29:26.656278  519099 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:29:26.656288  519099 kubeadm.go:319] 
	I1101 09:29:26.656349  519099 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:29:26.656359  519099 kubeadm.go:319] 
	I1101 09:29:26.656424  519099 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:29:26.656431  519099 kubeadm.go:319] 
	I1101 09:29:26.656481  519099 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:29:26.656560  519099 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:29:26.656658  519099 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:29:26.656670  519099 kubeadm.go:319] 
	I1101 09:29:26.656781  519099 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:29:26.656930  519099 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:29:26.656955  519099 kubeadm.go:319] 
	I1101 09:29:26.657101  519099 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8a9tj0.a4ts8ocmz09rc9ud \
	I1101 09:29:26.657246  519099 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 09:29:26.657281  519099 kubeadm.go:319] 	--control-plane 
	I1101 09:29:26.657290  519099 kubeadm.go:319] 
	I1101 09:29:26.657443  519099 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:29:26.657453  519099 kubeadm.go:319] 
	I1101 09:29:26.657540  519099 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8a9tj0.a4ts8ocmz09rc9ud \
	I1101 09:29:26.657658  519099 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
	I1101 09:29:26.660028  519099 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:29:26.660136  519099 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
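(Editorial note: the join command kubeadm prints above pins the cluster CA with --discovery-token-ca-cert-hash. That value is reproducible: it is the SHA-256 digest of the DER-encoded Subject Public Key Info of the CA certificate. A minimal Go sketch, reading the same /var/lib/minikube/certs/ca.crt path used earlier in this log:)

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// Recomputes kubeadm's discovery-token-ca-cert-hash for the cluster CA:
// SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA certificate.
func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}

(Run on the node, this reproduces the sha256:9f0c00… value in the join command; a joining node uses it to verify it is talking to the genuine control plane.)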
	I1101 09:29:26.660162  519099 cni.go:84] Creating CNI manager for ""
	I1101 09:29:26.660174  519099 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:29:26.661429  519099 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:29:26.662298  519099 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:29:26.666687  519099 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:29:26.666706  519099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:29:26.680444  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:29:26.889268  519099 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:29:26.889443  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:26.889532  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-050432 minikube.k8s.io/updated_at=2025_11_01T09_29_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=addons-050432 minikube.k8s.io/primary=true
	I1101 09:29:26.899276  519099 ops.go:34] apiserver oom_adj: -16
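(Editorial note: the oom_adj probe above, which ops.go reports as -16, checks that the API server is shielded from the kernel OOM killer. A rough standalone equivalent, assuming pgrep is available and kube-apiserver runs on the same host:)

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// Finds the kube-apiserver PID with pgrep and reads its legacy OOM score
// adjustment from /proc; -16 tells the kernel to prefer killing other
// processes under memory pressure.
func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err)
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		log.Fatal("kube-apiserver is not running")
	}
	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}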
	I1101 09:29:26.983283  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:27.483563  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:27.983385  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:28.484389  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:28.983602  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:29.483449  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:29.983521  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:30.483734  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:30.983959  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:31.483432  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:31.983750  519099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:29:32.048687  519099 kubeadm.go:1114] duration metric: took 5.159300553s to wait for elevateKubeSystemPrivileges
	I1101 09:29:32.048725  519099 kubeadm.go:403] duration metric: took 15.038477426s to StartCluster
	I1101 09:29:32.048745  519099 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:32.048877  519099 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:29:32.049228  519099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:32.049431  519099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:29:32.049432  519099 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:29:32.049456  519099 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:29:32.049564  519099 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-050432"
	I1101 09:29:32.049576  519099 addons.go:70] Setting registry=true in profile "addons-050432"
	I1101 09:29:32.049595  519099 addons.go:239] Setting addon registry=true in "addons-050432"
	I1101 09:29:32.049604  519099 addons.go:70] Setting default-storageclass=true in profile "addons-050432"
	I1101 09:29:32.049623  519099 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-050432"
	I1101 09:29:32.049630  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049638  519099 addons.go:70] Setting ingress=true in profile "addons-050432"
	I1101 09:29:32.049627  519099 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-050432"
	I1101 09:29:32.049655  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049654  519099 addons.go:70] Setting registry-creds=true in profile "addons-050432"
	I1101 09:29:32.049664  519099 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:32.049675  519099 addons.go:70] Setting metrics-server=true in profile "addons-050432"
	I1101 09:29:32.049676  519099 addons.go:239] Setting addon registry-creds=true in "addons-050432"
	I1101 09:29:32.049679  519099 addons.go:70] Setting storage-provisioner=true in profile "addons-050432"
	I1101 09:29:32.049689  519099 addons.go:239] Setting addon metrics-server=true in "addons-050432"
	I1101 09:29:32.049690  519099 addons.go:239] Setting addon storage-provisioner=true in "addons-050432"
	I1101 09:29:32.049649  519099 addons.go:239] Setting addon ingress=true in "addons-050432"
	I1101 09:29:32.049714  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049718  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049724  519099 addons.go:70] Setting volumesnapshots=true in profile "addons-050432"
	I1101 09:29:32.049736  519099 addons.go:239] Setting addon volumesnapshots=true in "addons-050432"
	I1101 09:29:32.049742  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049759  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.050069  519099 addons.go:70] Setting volcano=true in profile "addons-050432"
	I1101 09:29:32.050098  519099 addons.go:239] Setting addon volcano=true in "addons-050432"
	I1101 09:29:32.050124  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.050236  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050246  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050251  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050251  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050256  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.049588  519099 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-050432"
	I1101 09:29:32.050279  519099 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-050432"
	I1101 09:29:32.050302  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.050602  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050716  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.050947  519099 addons.go:70] Setting cloud-spanner=true in profile "addons-050432"
	I1101 09:29:32.050969  519099 addons.go:239] Setting addon cloud-spanner=true in "addons-050432"
	I1101 09:29:32.051006  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.051456  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.049631  519099 addons.go:70] Setting gcp-auth=true in profile "addons-050432"
	I1101 09:29:32.051679  519099 mustload.go:66] Loading cluster: addons-050432
	I1101 09:29:32.049667  519099 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-050432"
	I1101 09:29:32.052353  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.052604  519099 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:32.052857  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.049565  519099 addons.go:70] Setting yakd=true in profile "addons-050432"
	I1101 09:29:32.053489  519099 addons.go:239] Setting addon yakd=true in "addons-050432"
	I1101 09:29:32.053557  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.053645  519099 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-050432"
	I1101 09:29:32.053713  519099 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-050432"
	I1101 09:29:32.053744  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.054219  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.054237  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.049656  519099 addons.go:70] Setting ingress-dns=true in profile "addons-050432"
	I1101 09:29:32.054654  519099 addons.go:239] Setting addon ingress-dns=true in "addons-050432"
	I1101 09:29:32.054694  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.049622  519099 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-050432"
	I1101 09:29:32.050258  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.057110  519099 out.go:179] * Verifying Kubernetes components...
	I1101 09:29:32.049665  519099 addons.go:70] Setting inspektor-gadget=true in profile "addons-050432"
	I1101 09:29:32.049714  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.057813  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.057962  519099 addons.go:239] Setting addon inspektor-gadget=true in "addons-050432"
	I1101 09:29:32.058007  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.058497  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.059230  519099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:32.067869  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.068392  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.094856  519099 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:29:32.096123  519099 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:29:32.096156  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:29:32.096240  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
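(Editorial note: each `docker container inspect -f` call in this stretch resolves the host port Docker mapped to the kic container's SSH port 22; the result is the Port:32888 seen in the sshutil lines further down. A minimal sketch of the same lookup, using the identical Go template against the test profile's container name:)

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// Asks the Docker CLI which host port is published for the container's
// port 22/tcp, using the same Go template as the cli_runner call above.
// "addons-050432" is this test's profile; substitute your own container name.
func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-050432").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}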
	I1101 09:29:32.102616  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:29:32.103882  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:29:32.103907  519099 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:29:32.104031  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	W1101 09:29:32.104689  519099 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:29:32.125276  519099 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-050432"
	I1101 09:29:32.126081  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.127217  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.138019  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.141045  519099 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:29:32.143367  519099 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1101 09:29:32.144054  519099 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:29:32.144089  519099 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:29:32.144159  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.144373  519099 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:29:32.144478  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:29:32.145417  519099 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:29:32.147954  519099 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:29:32.147977  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:29:32.148017  519099 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:29:32.148035  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:29:32.148039  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.148095  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.148242  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:29:32.148367  519099 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:29:32.151666  519099 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:29:32.151716  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:29:32.152130  519099 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:29:32.152353  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:29:32.152463  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.153172  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:29:32.153188  519099 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:29:32.153238  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.156183  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:29:32.157964  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:29:32.159099  519099 addons.go:239] Setting addon default-storageclass=true in "addons-050432"
	I1101 09:29:32.159404  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:32.160209  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:32.165476  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:29:32.165925  519099 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:29:32.165544  519099 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:29:32.168707  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:29:32.170247  519099 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:29:32.174742  519099 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:29:32.174769  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:29:32.174848  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.175301  519099 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:32.175465  519099 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:29:32.175420  519099 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:29:32.176525  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:29:32.176548  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:29:32.176614  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.175367  519099 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:29:32.176906  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:29:32.176969  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.177033  519099 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:32.177127  519099 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:29:32.177162  519099 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:29:32.177174  519099 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:29:32.177238  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.178792  519099 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:29:32.178815  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:29:32.178876  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.184250  519099 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:29:32.184279  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:29:32.184354  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.202546  519099 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:29:32.204621  519099 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:29:32.205253  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.209067  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.214606  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.215058  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.218076  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.219692  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.225463  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.227902  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.229098  519099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:29:32.237722  519099 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:29:32.239076  519099 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:29:32.242118  519099 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:29:32.242146  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:29:32.242211  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:32.248302  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.256627  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.269365  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.272764  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.273343  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.282298  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.283923  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.293882  519099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:29:32.294905  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:32.393544  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:29:32.393646  519099 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:29:32.417458  519099 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:29:32.417494  519099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:29:32.417704  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:29:32.417729  519099 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:29:32.420970  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:29:32.422764  519099 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:29:32.422782  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:29:32.426324  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:29:32.428453  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:29:32.456659  519099 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:32.456689  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:29:32.460148  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:29:32.460402  519099 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:29:32.460423  519099 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:29:32.462021  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:29:32.462041  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:29:32.462261  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:29:32.465369  519099 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:29:32.465389  519099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:29:32.474468  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:29:32.476146  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:29:32.478760  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:29:32.478820  519099 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:29:32.478886  519099 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:29:32.478960  519099 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:29:32.490368  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:29:32.490563  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:32.499081  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:29:32.515053  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:29:32.515083  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:29:32.516090  519099 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:29:32.516109  519099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:29:32.525976  519099 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:29:32.526070  519099 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:29:32.533551  519099 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:29:32.533592  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:29:32.536761  519099 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:29:32.536786  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:29:32.570125  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:29:32.570251  519099 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:29:32.592032  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:29:32.594347  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:29:32.594374  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:29:32.596531  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:29:32.625338  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:29:32.668310  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:29:32.668341  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:29:32.678381  519099 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:32.678470  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:29:32.715205  519099 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 09:29:32.718708  519099 node_ready.go:35] waiting up to 6m0s for node "addons-050432" to be "Ready" ...
	I1101 09:29:32.723892  519099 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:29:32.723970  519099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:29:32.742995  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:29:32.807577  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:29:32.807687  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:29:32.898404  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:29:32.898532  519099 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:29:32.958041  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:29:32.958139  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:29:33.009343  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:29:33.009375  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:29:33.081283  519099 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:29:33.081316  519099 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:29:33.123133  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:29:33.223673  519099 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-050432" context rescaled to 1 replicas
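
The two cluster-level tweaks logged just above, the host.minikube.internal host record and the CoreDNS rescale to a single replica, can be verified directly against the cluster. This is a quick manual sketch assuming the standard kubeadm object names (ConfigMap and Deployment both named coredns in kube-system):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A2 host.minikube.internal
	kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'
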
	I1101 09:29:33.476961  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048463541s)
	I1101 09:29:33.714997  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.254808073s)
	I1101 09:29:33.715048  519099 addons.go:480] Verifying addon ingress=true in "addons-050432"
	I1101 09:29:33.715283  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.25299315s)
	I1101 09:29:33.715393  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.240887661s)
	I1101 09:29:33.715495  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.239315549s)
	I1101 09:29:33.715530  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.225133924s)
	I1101 09:29:33.715681  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.225096247s)
	W1101 09:29:33.715724  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:33.715759  519099 retry.go:31] will retry after 288.971351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
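
The validation failure above is in the manifest itself, not the API server: kubectl's client-side validation requires every YAML document to declare both apiVersion and kind, and at least one document in ig-crd.yaml apparently lacks them, which is why the forced re-applies further down keep reproducing the exact same error. A minimal diagnostic, assuming shell access to the node via "minikube -p addons-050432 ssh":

	# List the type headers of each document in the manifest; any document missing either
	# line trips the "apiVersion not set, kind not set" check.
	grep -nE '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml
	grep -c '^---' /etc/kubernetes/addons/ig-crd.yaml   # number of YAML document separators
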
	I1101 09:29:33.715728  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.21662131s)
	I1101 09:29:33.715799  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.123733521s)
	I1101 09:29:33.715824  519099 addons.go:480] Verifying addon registry=true in "addons-050432"
	I1101 09:29:33.715968  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.119406941s)
	I1101 09:29:33.715990  519099 addons.go:480] Verifying addon metrics-server=true in "addons-050432"
	I1101 09:29:33.716049  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.090674403s)
	I1101 09:29:33.717244  519099 out.go:179] * Verifying registry addon...
	I1101 09:29:33.717245  519099 out.go:179] * Verifying ingress addon...
	I1101 09:29:33.717244  519099 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-050432 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:29:33.719128  519099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:29:33.719480  519099 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:29:33.721700  519099 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:29:33.721730  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:33.723699  519099 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
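
The default-storageclass warning above is an optimistic-concurrency conflict: the StorageClass local-path changed between minikube's read and its write, so the API server rejected the stale update. The underlying change is only the is-default-class annotation, so it can be re-applied idempotently by hand; a sketch using the class names from the message:

	kubectl annotate storageclass local-path storageclass.kubernetes.io/is-default-class=false --overwrite
	kubectl annotate storageclass standard storageclass.kubernetes.io/is-default-class=true --overwrite
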
	I1101 09:29:33.746772  519099 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:29:33.746801  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:34.005350  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:34.184192  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.441074057s)
	W1101 09:29:34.184291  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:29:34.184329  519099 retry.go:31] will retry after 128.87656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
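
Unlike the ig-crd.yaml problem, this failure is purely one of ordering: the csi-hostpath-snapclass VolumeSnapshotClass is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the new kinds are not yet registered when the class is validated, hence "ensure CRDs are installed first". The forced re-apply below goes through once the CRDs are established; the same condition can be awaited explicitly (CRD names taken from the stdout above):

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
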
	I1101 09:29:34.184610  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.061415264s)
	I1101 09:29:34.184634  519099 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-050432"
	I1101 09:29:34.187238  519099 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:29:34.189289  519099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:29:34.193303  519099 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:29:34.193335  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
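
The kapi.go polling above and below is a plain label-selector watch; the equivalent manual queries, with namespaces and selectors copied from the log, are:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
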
	I1101 09:29:34.223156  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:34.223359  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:34.313497  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1101 09:29:34.633097  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:34.633126  519099 retry.go:31] will retry after 190.255939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:34.693549  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:34.722579  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:34.722608  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:29:34.722666  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
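
The node_ready.go warnings record that the node's Ready condition is still False while the addons install; the wait budget is the 6m0s set at 09:29:32.718 above. The condition being polled can be read directly:

	kubectl get node addons-050432 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
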
	I1101 09:29:34.823747  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:35.194104  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:35.222109  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:35.222138  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:35.693436  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:35.722410  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:35.722456  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:36.193022  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:36.221930  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:36.222251  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:36.692487  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:36.722815  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:36.722936  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:36.723182  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:36.823808  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.000014673s)
	W1101 09:29:36.823887  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:36.823905  519099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.510364751s)
	I1101 09:29:36.823917  519099 retry.go:31] will retry after 333.829323ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:37.158107  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:37.193531  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:37.222883  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:37.223028  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:37.693526  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:37.719962  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:37.719997  519099 retry.go:31] will retry after 545.128756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:37.722290  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:37.722509  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:38.192874  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:38.222591  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:38.222817  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:38.265916  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:38.693548  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:38.721847  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:38.722032  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:29:38.818309  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:38.818343  519099 retry.go:31] will retry after 1.706670957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:39.193189  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:39.221398  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:39.222108  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:39.222705  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:39.693954  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:39.722553  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:39.722810  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:39.760068  519099 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:29:39.760147  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:39.777599  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:29:39.885897  519099 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:29:39.900185  519099 addons.go:239] Setting addon gcp-auth=true in "addons-050432"
	I1101 09:29:39.900264  519099 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:29:39.900626  519099 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:29:39.918295  519099 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:29:39.918354  519099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:29:39.936168  519099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
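
The SSH client above reaches the node through the host port Docker publishes for the container's 22/tcp; the inspect template in the log extracts it, and the same mapping can be read with docker port:

	docker port addons-050432 22/tcp   # should show the host side of the mapping (32888 in this run)
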
	I1101 09:29:40.036966  519099 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:29:40.038016  519099 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:29:40.038911  519099 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:29:40.038928  519099 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:29:40.052667  519099 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:29:40.052697  519099 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:29:40.066233  519099 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:29:40.066257  519099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:29:40.079778  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:29:40.193549  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:40.222354  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:40.222620  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:40.398822  519099 addons.go:480] Verifying addon gcp-auth=true in "addons-050432"
	I1101 09:29:40.399927  519099 out.go:179] * Verifying gcp-auth addon...
	I1101 09:29:40.401437  519099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:29:40.405756  519099 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:29:40.405777  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
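
gcp-auth injects the credentials copied earlier (google_application_credentials.json) into workloads through an admission webhook, which is what the gcr.io/k8s-minikube/gcp-auth-webhook image above provides. The pod being polled here and the webhook registration can be checked manually; the second command is a loose match since the exact configuration name is not shown in the log:

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
	kubectl get mutatingwebhookconfigurations | grep -i gcp-auth
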
	I1101 09:29:40.525977  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:40.692560  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:40.722424  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:40.722576  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:40.904606  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:29:41.084355  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:41.084390  519099 retry.go:31] will retry after 1.920037926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:41.192594  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:41.222524  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:41.222597  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:41.222728  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:41.404792  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:41.693227  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:41.721917  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:41.722093  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:41.905003  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:42.193251  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:42.222233  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:42.222287  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:42.405155  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:42.692299  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:42.722133  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:42.722355  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:42.906100  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:43.005223  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:43.193078  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:43.222158  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:43.222699  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:43.404759  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:29:43.569873  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:43.569910  519099 retry.go:31] will retry after 3.287494215s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:43.692870  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:43.721670  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:43.722390  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:43.722573  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:43.904322  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:44.192525  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:44.222399  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:44.222604  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:44.405587  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:44.692616  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:44.722538  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:44.722625  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:44.904487  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:45.193031  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:45.221779  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:45.222127  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:45.406065  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:45.693296  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:45.722144  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:45.722343  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:45.722406  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:45.904869  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:46.192986  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:46.221778  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:46.222725  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:46.404795  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:46.693352  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:46.722221  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:46.722438  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:46.858441  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:46.904668  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:47.193558  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:47.222660  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:47.222868  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:47.404822  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:29:47.415247  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:47.415279  519099 retry.go:31] will retry after 2.244820979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:47.692316  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:47.722024  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:47.722185  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:47.905241  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:48.192377  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:48.222281  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:48.222365  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:48.222545  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:48.404329  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:48.692497  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:48.722223  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:48.722411  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:48.904975  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:49.193290  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:49.223036  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:49.223281  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.405263  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:49.660781  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:49.693515  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:49.722411  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:49.722555  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:49.904061  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:50.192699  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:50.222249  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:50.222354  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:29:50.224044  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:50.224073  519099 retry.go:31] will retry after 7.889880289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:50.405281  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:50.692204  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:50.722085  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:50.722090  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:50.722325  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:50.904903  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:51.193637  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:51.222483  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:51.222592  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:51.404261  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:51.692100  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:51.721857  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:51.722355  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:51.905328  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:52.192285  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:52.221981  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:52.222047  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:52.405074  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:52.693090  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:52.721849  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:52.722715  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:52.904151  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:53.193170  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:29:53.222053  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:53.222204  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:53.222203  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:53.405076  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:53.693328  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:53.722252  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:53.722315  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:53.905049  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:54.193127  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:54.222141  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:54.222508  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:54.404822  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:54.692702  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:54.722553  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:54.722747  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:54.904425  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:55.192856  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:55.222414  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:55.222637  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:55.404964  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:55.693157  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:55.721814  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:55.721820  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:55.722311  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:55.905091  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:56.193198  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:56.221801  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:56.222477  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:56.404477  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:56.692517  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:56.722295  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:56.722489  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:56.905236  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:57.192580  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:57.222416  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:57.222432  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:57.405315  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:57.692028  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:57.721541  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:57.722251  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:57.904938  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:58.114241  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:29:58.192347  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:58.222169  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:29:58.222292  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:29:58.222342  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:58.405448  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:29:58.671703  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:58.671746  519099 retry.go:31] will retry after 6.232771096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:29:58.692695  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:58.722503  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:58.722663  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:58.904153  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:59.193406  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:59.222293  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:59.222369  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:59.405015  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:59.692924  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:29:59.722334  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:29:59.722445  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:29:59.904519  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:00.192401  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:00.222090  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:00.222266  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:00.405152  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:00.692322  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:00.724796  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:30:00.724958  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:00.726075  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:00.905231  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:01.192307  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:01.222450  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.222474  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:01.405391  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:01.692419  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:01.722274  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:01.722410  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:01.904318  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:02.192334  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:02.222249  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.222471  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:02.405339  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:02.692465  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:02.722382  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:02.722444  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:02.905521  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:03.192614  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:03.222647  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:03.222668  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:30:03.222712  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:03.404485  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:03.692370  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:03.722208  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:03.722304  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:03.905183  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.192141  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:04.222168  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:04.222322  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:04.405176  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.692801  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:04.722415  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:04.722488  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:04.904364  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:04.905394  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:05.192789  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:05.221428  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:05.222272  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:05.405554  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:05.468412  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:05.468447  519099 retry.go:31] will retry after 21.20891017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:05.692738  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:05.722614  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:05.722612  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:05.722774  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:05.904571  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:06.192975  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:06.221988  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:06.222318  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:06.405282  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:06.692237  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:06.722109  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:06.722307  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:06.905628  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:07.193432  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:07.222404  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:07.222602  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.404717  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:07.693019  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:07.722019  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:07.722510  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:07.904409  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:08.192274  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:08.222269  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:30:08.222308  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:08.222481  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:08.404672  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:08.692748  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:08.723000  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:08.723403  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:08.904933  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:09.193087  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:09.222031  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.222200  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:09.405046  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:09.693075  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:09.724213  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:09.724417  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:09.904795  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:10.192817  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:10.222592  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:10.222663  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:10.404279  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:10.692462  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:10.722322  519099 node_ready.go:57] node "addons-050432" has "Ready":"False" status (will retry)
	I1101 09:30:10.722343  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:10.722514  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:10.905363  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:11.192222  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:11.221856  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:11.222051  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:11.404730  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:11.692656  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:11.722535  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:11.722627  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:11.905409  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:12.192624  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:12.222634  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:12.222737  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:12.404659  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:12.692821  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:12.722383  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:12.722401  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:12.904373  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.192256  519099 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:30:13.192280  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:13.221502  519099 node_ready.go:49] node "addons-050432" is "Ready"
	I1101 09:30:13.221539  519099 node_ready.go:38] duration metric: took 40.502796006s for node "addons-050432" to be "Ready" ...
	I1101 09:30:13.221559  519099 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:30:13.221626  519099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:30:13.221662  519099 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:30:13.221693  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:13.224814  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.241664  519099 api_server.go:72] duration metric: took 41.192130567s to wait for apiserver process to appear ...
	I1101 09:30:13.241695  519099 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:30:13.241721  519099 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 09:30:13.246417  519099 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 09:30:13.247599  519099 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:13.247636  519099 api_server.go:131] duration metric: took 5.933584ms to wait for apiserver health ...
	I1101 09:30:13.247651  519099 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:13.251391  519099 system_pods.go:59] 20 kube-system pods found
	I1101 09:30:13.251431  519099 system_pods.go:61] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:13.251439  519099 system_pods.go:61] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:13.251449  519099 system_pods.go:61] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:13.251453  519099 system_pods.go:61] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending
	I1101 09:30:13.251459  519099 system_pods.go:61] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:13.251466  519099 system_pods.go:61] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:13.251472  519099 system_pods.go:61] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:13.251476  519099 system_pods.go:61] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:13.251485  519099 system_pods.go:61] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:13.251493  519099 system_pods.go:61] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:13.251497  519099 system_pods.go:61] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:13.251500  519099 system_pods.go:61] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:13.251505  519099 system_pods.go:61] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:13.251514  519099 system_pods.go:61] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:13.251521  519099 system_pods.go:61] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:13.251526  519099 system_pods.go:61] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:13.251533  519099 system_pods.go:61] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:13.251538  519099 system_pods.go:61] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.251545  519099 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending
	I1101 09:30:13.251550  519099 system_pods.go:61] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:13.251558  519099 system_pods.go:74] duration metric: took 3.899579ms to wait for pod list to return data ...
	I1101 09:30:13.251567  519099 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:30:13.253888  519099 default_sa.go:45] found service account: "default"
	I1101 09:30:13.253917  519099 default_sa.go:55] duration metric: took 2.339005ms for default service account to be created ...
	I1101 09:30:13.253926  519099 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:30:13.258503  519099 system_pods.go:86] 20 kube-system pods found
	I1101 09:30:13.258539  519099 system_pods.go:89] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:13.258547  519099 system_pods.go:89] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:13.258554  519099 system_pods.go:89] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:13.258558  519099 system_pods.go:89] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending
	I1101 09:30:13.258563  519099 system_pods.go:89] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:13.258567  519099 system_pods.go:89] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:13.258571  519099 system_pods.go:89] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:13.258575  519099 system_pods.go:89] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:13.258578  519099 system_pods.go:89] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:13.258584  519099 system_pods.go:89] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:13.258590  519099 system_pods.go:89] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:13.258594  519099 system_pods.go:89] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:13.258601  519099 system_pods.go:89] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:13.258607  519099 system_pods.go:89] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:13.258615  519099 system_pods.go:89] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:13.258619  519099 system_pods.go:89] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:13.258632  519099 system_pods.go:89] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:13.258641  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.258650  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending
	I1101 09:30:13.258657  519099 system_pods.go:89] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:13.258681  519099 retry.go:31] will retry after 270.622651ms: missing components: kube-dns
	I1101 09:30:13.409195  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.536278  519099 system_pods.go:86] 20 kube-system pods found
	I1101 09:30:13.536324  519099 system_pods.go:89] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:13.536334  519099 system_pods.go:89] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:13.536346  519099 system_pods.go:89] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:13.536353  519099 system_pods.go:89] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:13.536361  519099 system_pods.go:89] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:13.536366  519099 system_pods.go:89] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:13.536372  519099 system_pods.go:89] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:13.536378  519099 system_pods.go:89] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:13.536392  519099 system_pods.go:89] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:13.536401  519099 system_pods.go:89] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:13.536406  519099 system_pods.go:89] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:13.536414  519099 system_pods.go:89] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:13.536421  519099 system_pods.go:89] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:13.536430  519099 system_pods.go:89] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:13.536437  519099 system_pods.go:89] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:13.536446  519099 system_pods.go:89] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:13.536455  519099 system_pods.go:89] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:13.536463  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.536471  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.536479  519099 system_pods.go:89] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:13.536500  519099 retry.go:31] will retry after 380.716652ms: missing components: kube-dns
	I1101 09:30:13.693990  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:13.723011  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:13.723131  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:13.905437  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:13.922460  519099 system_pods.go:86] 20 kube-system pods found
	I1101 09:30:13.922503  519099 system_pods.go:89] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:13.922515  519099 system_pods.go:89] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:13.922526  519099 system_pods.go:89] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:13.922537  519099 system_pods.go:89] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:13.922547  519099 system_pods.go:89] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:13.922554  519099 system_pods.go:89] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:13.922561  519099 system_pods.go:89] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:13.922567  519099 system_pods.go:89] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:13.922577  519099 system_pods.go:89] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:13.922587  519099 system_pods.go:89] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:13.922596  519099 system_pods.go:89] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:13.922602  519099 system_pods.go:89] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:13.922614  519099 system_pods.go:89] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:13.922626  519099 system_pods.go:89] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:13.922637  519099 system_pods.go:89] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:13.922645  519099 system_pods.go:89] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:13.922654  519099 system_pods.go:89] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:13.922662  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.922676  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:13.922684  519099 system_pods.go:89] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:30:13.922708  519099 retry.go:31] will retry after 293.8172ms: missing components: kube-dns
	I1101 09:30:14.193938  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:14.221890  519099 system_pods.go:86] 20 kube-system pods found
	I1101 09:30:14.221935  519099 system_pods.go:89] "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:30:14.221944  519099 system_pods.go:89] "coredns-66bc5c9577-q9w79" [dd4bc6c1-d8f6-4217-a47d-5702facf5cef] Running
	I1101 09:30:14.221957  519099 system_pods.go:89] "csi-hostpath-attacher-0" [c92d19b5-53dd-4790-951b-f17708691fc8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:30:14.221968  519099 system_pods.go:89] "csi-hostpath-resizer-0" [904e1210-26cb-4f3a-9f9d-792aa271e4c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:30:14.221985  519099 system_pods.go:89] "csi-hostpathplugin-kgt98" [1bccf77b-7d33-4ddb-a97f-ac28fb830b08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:30:14.221992  519099 system_pods.go:89] "etcd-addons-050432" [ad234ee4-8ed9-4e39-8e48-0b4f7fc10842] Running
	I1101 09:30:14.221998  519099 system_pods.go:89] "kindnet-thccv" [58dd6cee-ae6d-46fc-9aae-8e15b061163e] Running
	I1101 09:30:14.222006  519099 system_pods.go:89] "kube-apiserver-addons-050432" [eb1bdccb-bbc5-42cf-92ec-72fefdd17257] Running
	I1101 09:30:14.222022  519099 system_pods.go:89] "kube-controller-manager-addons-050432" [56900646-78db-46eb-ae95-a13ff716c639] Running
	I1101 09:30:14.222032  519099 system_pods.go:89] "kube-ingress-dns-minikube" [f749b80c-82af-4955-b7f5-0ad7e1764b81] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:30:14.222037  519099 system_pods.go:89] "kube-proxy-4zrl2" [32920d60-2c32-4373-a7e6-e9ac35143118] Running
	I1101 09:30:14.222043  519099 system_pods.go:89] "kube-scheduler-addons-050432" [1326c1dd-4381-404e-b859-53575b0cd6e0] Running
	I1101 09:30:14.222051  519099 system_pods.go:89] "metrics-server-85b7d694d7-qbbqn" [30ad2449-3241-420e-809f-47ee08c65a39] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:30:14.222059  519099 system_pods.go:89] "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:30:14.222069  519099 system_pods.go:89] "registry-6b586f9694-tdrzt" [a03b3b38-efc6-4b4e-ab7b-ca924913d632] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:30:14.222078  519099 system_pods.go:89] "registry-creds-764b6fb674-8s95r" [933f9696-6269-4a4a-b066-9f938b019f9b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:30:14.222086  519099 system_pods.go:89] "registry-proxy-ftdnb" [3e5edf9d-0dac-458d-b44e-7564cf6619c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:30:14.222094  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l826d" [7fb8f85e-051c-40c4-b4a5-2c5c851f3270] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:14.222104  519099 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tqzj5" [3644f7bf-33cf-4c24-8422-99f20e501ed9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:30:14.222110  519099 system_pods.go:89] "storage-provisioner" [873335ec-19d5-4ffd-a470-a5d15051fad9] Running
	I1101 09:30:14.222121  519099 system_pods.go:126] duration metric: took 968.188399ms to wait for k8s-apps to be running ...
	I1101 09:30:14.222136  519099 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:30:14.222200  519099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:30:14.222821  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.222986  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:14.238578  519099 system_svc.go:56] duration metric: took 16.431621ms WaitForService to wait for kubelet
	I1101 09:30:14.238616  519099 kubeadm.go:587] duration metric: took 42.1890915s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:14.238646  519099 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:14.242002  519099 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:30:14.242046  519099 node_conditions.go:123] node cpu capacity is 8
	I1101 09:30:14.242071  519099 node_conditions.go:105] duration metric: took 3.417938ms to run NodePressure ...
	I1101 09:30:14.242088  519099 start.go:242] waiting for startup goroutines ...
	I1101 09:30:14.405393  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:14.693015  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:14.722965  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:14.722995  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:14.905324  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:15.193246  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:15.223242  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:15.223388  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:15.405742  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:15.693388  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:15.722490  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:15.722534  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:15.904501  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:16.195075  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:16.224906  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:16.225753  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:16.405403  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:16.693257  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:16.723924  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:16.725159  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:16.906198  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:17.194468  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:17.222982  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:17.223072  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:17.405903  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:17.694138  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:17.723199  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:17.723250  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:17.905607  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:18.193703  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:18.222882  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:18.223085  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:18.405302  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:18.693103  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:18.723374  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:18.723438  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:18.905783  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:19.193876  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:19.223355  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:19.223376  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:19.405485  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:19.694283  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:19.723526  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:19.723739  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:19.904654  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:20.193460  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:20.223493  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:20.223496  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:20.405287  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:20.692687  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:20.722633  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:20.722737  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:20.905080  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:21.194215  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:21.222951  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:21.223116  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:21.404934  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:21.694961  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:21.723272  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:21.723284  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:21.905336  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:22.194521  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:22.222762  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:22.222891  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:22.404903  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:22.694306  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:22.724979  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:22.725373  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:22.904727  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:23.193516  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:23.222865  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:23.222913  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:23.404766  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:23.712210  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:23.722993  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:23.723074  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:23.905641  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:24.193822  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:24.222515  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:24.222576  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:24.404445  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:24.692741  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:24.722868  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:24.722982  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:24.905132  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:25.194097  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:25.223351  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:25.223409  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:25.406652  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:25.694048  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:25.723701  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:25.723744  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:25.905623  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:26.193879  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:26.223341  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:26.223633  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.405820  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:26.677940  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:26.693486  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:26.722747  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:26.722826  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:26.904663  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:27.263748  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:27.263823  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:27.263918  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:30:27.371364  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:27.371402  519099 retry.go:31] will retry after 27.055947224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:27.405157  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:27.692857  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:27.722729  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:27.722941  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:27.904823  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:28.193429  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:28.222255  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:28.222888  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:28.404943  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:28.693200  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:28.723042  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:28.723086  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:28.904918  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:29.193678  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:29.222763  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.222901  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:29.405122  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:29.694635  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:29.722477  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:29.722673  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:29.905002  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:30.193550  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:30.222656  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:30.222672  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:30.404490  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:30.692741  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:30.722737  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:30.722747  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:30.905449  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:31.192578  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:31.222467  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:31.222508  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:31.405308  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:31.693048  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:31.722737  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:31.722954  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:31.905176  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:32.192727  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:32.222776  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:32.222878  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.404663  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:32.693013  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:32.722954  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:32.723024  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:32.905692  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:33.193448  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:33.222463  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.222648  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:33.405063  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:33.693980  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:33.722926  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:33.723067  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:33.907412  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:34.193684  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:34.223089  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:34.223324  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:34.405175  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:34.693944  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:34.722957  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:34.723032  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:34.905308  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:35.193188  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:35.222938  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:35.223128  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:35.405239  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:35.693371  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:35.723858  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:35.723858  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:35.905014  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:36.194046  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:36.295139  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:36.295315  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.405102  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:36.693996  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:36.723159  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:36.723169  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:36.904957  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:37.216752  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:37.223278  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:37.223364  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:37.405128  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:37.692718  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:37.722712  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:37.722828  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:37.905135  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:38.193771  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:38.222472  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.222531  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:38.404698  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:38.693144  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:38.722829  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:38.722990  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:38.904582  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:39.193096  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:39.223518  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:39.223955  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:39.405254  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:39.692958  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:39.723476  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:39.723787  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:39.907049  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:40.194121  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:40.224626  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:40.224722  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:40.405011  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:40.693600  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:40.722416  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:40.722431  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:30:40.905153  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:41.193351  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:41.222119  519099 kapi.go:107] duration metric: took 1m7.502983362s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:30:41.222874  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.404975  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:41.693530  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:41.724335  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:41.905486  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:42.192774  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:42.222531  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:42.404254  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:42.692673  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:42.722478  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:42.905313  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:43.221431  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:43.328592  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:43.568814  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:43.693467  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:43.794547  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:43.904547  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:44.192781  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:44.222584  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:44.404303  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:44.693159  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:44.723217  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:44.905797  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:45.195599  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:45.225569  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:45.406874  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:45.693064  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:45.722889  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:45.905366  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:46.193182  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:46.223061  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.405549  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:46.693315  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:46.723496  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:46.905942  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:47.194094  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:47.294390  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:47.405126  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:47.693879  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:47.722711  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:47.905089  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:48.193573  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:48.224029  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:48.405613  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:48.693327  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:48.723545  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:48.904768  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:49.193311  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:49.223827  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:49.405530  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:49.693748  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:49.722801  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:49.905325  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:50.192877  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:50.222431  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:50.405320  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:50.694403  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:50.723849  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:50.905570  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:51.193301  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:51.223378  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:51.405223  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:51.693011  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:51.723230  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:51.906175  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:52.192674  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:52.224022  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:52.404945  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:52.693268  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:52.723317  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:52.907312  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:53.192870  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:53.222827  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.404035  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:53.693489  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:53.723081  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:53.905714  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:54.193415  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:54.223586  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:54.404332  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:54.428419  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:30:54.693236  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:54.723366  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:54.905329  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:30:55.079647  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:55.079687  519099 retry.go:31] will retry after 27.58208303s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:30:55.193566  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:55.223874  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:55.405619  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:55.693458  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:55.723241  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:55.907344  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:30:56.195453  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:56.224200  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:56.406196  519099 kapi.go:107] duration metric: took 1m16.004752522s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:30:56.407774  519099 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-050432 cluster.
	I1101 09:30:56.409039  519099 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:30:56.410097  519099 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
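	Note on the gcp-auth message above: opting a single pod out of credential mounting is done by adding the `gcp-auth-skip-secret` label key that the log mentions. As a minimal illustrative sketch only (the pod name, image, and label value are assumptions, not taken from this test run):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-example        # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # label key referenced in the log; value assumed
	spec:
	  containers:
	  - name: app
	    image: busybox                  # placeholder image
	    command: ["sleep", "3600"]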
	I1101 09:30:56.693798  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:56.724024  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:57.193570  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:57.223815  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:57.693999  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:57.723203  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:58.194563  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:58.223613  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:58.693541  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:58.723828  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:59.193334  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:59.223185  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:30:59.692920  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:30:59.722532  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.193461  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:00.223382  519099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:31:00.693222  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:00.723727  519099 kapi.go:107] duration metric: took 1m27.004239809s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:31:01.193544  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:01.694525  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:02.193085  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:02.693606  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:03.192888  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:03.694202  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:04.193265  519099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:31:04.693910  519099 kapi.go:107] duration metric: took 1m30.504619765s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:31:22.662127  519099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:31:23.224180  519099 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:31:23.224304  519099 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:31:23.227169  519099 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1101 09:31:23.228173  519099 addons.go:515] duration metric: took 1m51.178718833s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1101 09:31:23.228221  519099 start.go:247] waiting for cluster config update ...
	I1101 09:31:23.228247  519099 start.go:256] writing updated cluster config ...
	I1101 09:31:23.228560  519099 ssh_runner.go:195] Run: rm -f paused
	I1101 09:31:23.232675  519099 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:23.236685  519099 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q9w79" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.241184  519099 pod_ready.go:94] pod "coredns-66bc5c9577-q9w79" is "Ready"
	I1101 09:31:23.241208  519099 pod_ready.go:86] duration metric: took 4.495258ms for pod "coredns-66bc5c9577-q9w79" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.243307  519099 pod_ready.go:83] waiting for pod "etcd-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.247712  519099 pod_ready.go:94] pod "etcd-addons-050432" is "Ready"
	I1101 09:31:23.247734  519099 pod_ready.go:86] duration metric: took 4.401171ms for pod "etcd-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.249728  519099 pod_ready.go:83] waiting for pod "kube-apiserver-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.253667  519099 pod_ready.go:94] pod "kube-apiserver-addons-050432" is "Ready"
	I1101 09:31:23.253692  519099 pod_ready.go:86] duration metric: took 3.942504ms for pod "kube-apiserver-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.255730  519099 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.637293  519099 pod_ready.go:94] pod "kube-controller-manager-addons-050432" is "Ready"
	I1101 09:31:23.637324  519099 pod_ready.go:86] duration metric: took 381.571204ms for pod "kube-controller-manager-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:23.836962  519099 pod_ready.go:83] waiting for pod "kube-proxy-4zrl2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:24.236469  519099 pod_ready.go:94] pod "kube-proxy-4zrl2" is "Ready"
	I1101 09:31:24.236509  519099 pod_ready.go:86] duration metric: took 399.518195ms for pod "kube-proxy-4zrl2" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:24.437477  519099 pod_ready.go:83] waiting for pod "kube-scheduler-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:24.836740  519099 pod_ready.go:94] pod "kube-scheduler-addons-050432" is "Ready"
	I1101 09:31:24.836768  519099 pod_ready.go:86] duration metric: took 399.265438ms for pod "kube-scheduler-addons-050432" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:31:24.836780  519099 pod_ready.go:40] duration metric: took 1.604071211s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:31:24.885155  519099 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:31:24.886464  519099 out.go:179] * Done! kubectl is now configured to use "addons-050432" cluster and "default" namespace by default
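	(Editor's illustrative sketch, not minikube source: the pod_ready lines above trace a label-selector poll of kube-system pods until each is Ready or gone. A minimal client-go loop in that spirit might look like the following; the selector, namespace, interval, and timeout are assumptions for the example only.)
	
	    // Illustrative only: polling kube-system pods for readiness, in the spirit
	    // of the pod_ready waits logged above. Not minikube's actual implementation.
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )
	
	    // podReady reports whether a pod's Ready condition is True.
	    func podReady(p *corev1.Pod) bool {
	    	for _, c := range p.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }
	
	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)
	
	    	// Assumed example selector; the log above checks several component labels.
	    	selector := "k8s-app=kube-dns"
	
	    	// Wait up to 4 minutes for every matching kube-system pod to be Ready,
	    	// mirroring the "extra waiting up to 4m0s" message in the log.
	    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
	    		func(ctx context.Context) (bool, error) {
	    			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	    			if err != nil {
	    				return false, nil // treat transient API errors as "keep polling"
	    			}
	    			for i := range pods.Items {
	    				if !podReady(&pods.Items[i]) {
	    					return false, nil
	    				}
	    			}
	    			return true, nil
	    		})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("all matching pods are Ready")
	    }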
	
	
	==> CRI-O <==
	Nov 01 09:31:45 addons-050432 crio[764]: time="2025-11-01T09:31:45.461123626Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:1b7df28ab704e09881153fb1f18ff02c77477493b8f096854625d9fa0a60fd68 UID:931ae887-0378-46e7-b98b-e7c3b4b4d0da NetNS:/var/run/netns/1060b093-1092-43d5-8734-9dd1faf954d8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b9a738}] Aliases:map[]}"
	Nov 01 09:31:45 addons-050432 crio[764]: time="2025-11-01T09:31:45.461165634Z" level=info msg="Adding pod default_registry-test to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:31:45 addons-050432 crio[764]: time="2025-11-01T09:31:45.472914665Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:1b7df28ab704e09881153fb1f18ff02c77477493b8f096854625d9fa0a60fd68 UID:931ae887-0378-46e7-b98b-e7c3b4b4d0da NetNS:/var/run/netns/1060b093-1092-43d5-8734-9dd1faf954d8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b9a738}] Aliases:map[]}"
	Nov 01 09:31:45 addons-050432 crio[764]: time="2025-11-01T09:31:45.473081174Z" level=info msg="Checking pod default_registry-test for CNI network kindnet (type=ptp)"
	Nov 01 09:31:45 addons-050432 crio[764]: time="2025-11-01T09:31:45.474030512Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:31:45 addons-050432 crio[764]: time="2025-11-01T09:31:45.475030808Z" level=info msg="Ran pod sandbox 1b7df28ab704e09881153fb1f18ff02c77477493b8f096854625d9fa0a60fd68 with infra container: default/registry-test/POD" id=2ca778cc-39cc-4739-ae0c-20ff3e867994 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:31:45 addons-050432 crio[764]: time="2025-11-01T09:31:45.476733951Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:latest" id=cd47ebde-2a92-4d4b-a08c-1d2c2229d599 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:31:45 addons-050432 crio[764]: time="2025-11-01T09:31:45.478632784Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:latest\""
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.168865877Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5/POD" id=645ad76c-d314-4c3f-a7d3-e57638c7eda8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.16899037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.176149605Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5 Namespace:local-path-storage ID:2c577abf76fde487829323fc2a61299fab0573e9cc0bae652cc8020f0fada2be UID:9b332241-9209-4a66-9c90-1a2c3b91deee NetNS:/var/run/netns/68b6878a-1b43-4ffb-b54a-9d121999dff0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b9aeb0}] Aliases:map[]}"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.176196542Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.188516976Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5 Namespace:local-path-storage ID:2c577abf76fde487829323fc2a61299fab0573e9cc0bae652cc8020f0fada2be UID:9b332241-9209-4a66-9c90-1a2c3b91deee NetNS:/var/run/netns/68b6878a-1b43-4ffb-b54a-9d121999dff0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b9aeb0}] Aliases:map[]}"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.188714494Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5 for CNI network kindnet (type=ptp)"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.189683135Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.190608959Z" level=info msg="Ran pod sandbox 2c577abf76fde487829323fc2a61299fab0573e9cc0bae652cc8020f0fada2be with infra container: local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5/POD" id=645ad76c-d314-4c3f-a7d3-e57638c7eda8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.192087732Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=4dcf10cd-d6af-41de-a905-6eecf178d545 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.193969558Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f4b0f84b-80ef-4769-8962-4e782fb7087e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.198308248Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5/helper-pod" id=8ca64f03-6286-4a5f-93d5-bac2f49d23bc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.19844965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.207243287Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.207817953Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.246339911Z" level=info msg="Created container ded813cd3e10213214b38f95285e08413285ad369eef5916ff071d79f012d7a6: local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5/helper-pod" id=8ca64f03-6286-4a5f-93d5-bac2f49d23bc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.247406383Z" level=info msg="Starting container: ded813cd3e10213214b38f95285e08413285ad369eef5916ff071d79f012d7a6" id=59605d29-95e9-4388-873e-3d77b0dd89bc name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:31:46 addons-050432 crio[764]: time="2025-11-01T09:31:46.249236135Z" level=info msg="Started container" PID=7054 containerID=ded813cd3e10213214b38f95285e08413285ad369eef5916ff071d79f012d7a6 description=local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5/helper-pod id=59605d29-95e9-4388-873e-3d77b0dd89bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c577abf76fde487829323fc2a61299fab0573e9cc0bae652cc8020f0fada2be
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	ded813cd3e102       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             Less than a second ago   Exited              helper-pod                               0                   2c577abf76fde       helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5   local-path-storage
	deab6a20d382c       docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737                                            3 seconds ago            Exited              busybox                                  0                   f8c2dd1905e07       test-local-path                                              default
	a6e5bf05eee4f       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            9 seconds ago            Exited              helper-pod                               0                   4e641c06566fc       helper-pod-create-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5   local-path-storage
	f9a609afe9466       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          18 seconds ago           Running             busybox                                  0                   b05a5d83dd35c       busybox                                                      default
	0cd2226cd22ce       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          43 seconds ago           Running             csi-snapshotter                          0                   6198459e9a1ae       csi-hostpathplugin-kgt98                                     kube-system
	ebc6c01c90c2f       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          45 seconds ago           Running             csi-provisioner                          0                   6198459e9a1ae       csi-hostpathplugin-kgt98                                     kube-system
	81c14cf7ac31f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            46 seconds ago           Running             liveness-probe                           0                   6198459e9a1ae       csi-hostpathplugin-kgt98                                     kube-system
	ba4952c9861da       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           46 seconds ago           Running             hostpath                                 0                   6198459e9a1ae       csi-hostpathplugin-kgt98                                     kube-system
	156252f8ed3d9       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             47 seconds ago           Running             controller                               0                   3ddb99ff3796f       ingress-nginx-controller-675c5ddd98-z8482                    ingress-nginx
	5ef835cd52f21       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 51 seconds ago           Running             gcp-auth                                 0                   6323bd8989773       gcp-auth-78565c9fb4-ll292                                    gcp-auth
	706e94c8f54a5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            54 seconds ago           Running             gadget                                   0                   f870604cf07c7       gadget-hcssg                                                 gadget
	f18ba15647b79       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                57 seconds ago           Running             node-driver-registrar                    0                   6198459e9a1ae       csi-hostpathplugin-kgt98                                     kube-system
	9138926a4ebf5       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             58 seconds ago           Running             local-path-provisioner                   0                   a4fa77bbf1181       local-path-provisioner-648f6765c9-vwzcp                      local-path-storage
	b24762f9cf57c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago       Running             csi-attacher                             0                   d6e867b734bad       csi-hostpath-attacher-0                                      kube-system
	43b485de84b03       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     About a minute ago       Running             nvidia-device-plugin-ctr                 0                   7dcf20edbc88d       nvidia-device-plugin-daemonset-585vh                         kube-system
	1b71e4eeb4433       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago       Running             volume-snapshot-controller               0                   f6955374c5ff4       snapshot-controller-7d9fbc56b8-tqzj5                         kube-system
	c4071d2f7fecc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     About a minute ago       Running             amd-gpu-device-plugin                    0                   809aa23cc33d7       amd-gpu-device-plugin-xj8r5                                  kube-system
	c19b6a74eec58       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              About a minute ago       Running             registry-proxy                           0                   eb65c7dfa68e8       registry-proxy-ftdnb                                         kube-system
	47018dafba328       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago       Running             csi-resizer                              0                   6a70f7e130670       csi-hostpath-resizer-0                                       kube-system
	39e74546adc34       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago       Running             csi-external-health-monitor-controller   0                   6198459e9a1ae       csi-hostpathplugin-kgt98                                     kube-system
	174b86c0a84a5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago       Exited              patch                                    0                   7be699f21f136       ingress-nginx-admission-patch-8r4w5                          ingress-nginx
	785e2c163a99a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago       Exited              create                                   0                   ec1b0999ab9eb       ingress-nginx-admission-create-6l9tg                         ingress-nginx
	c898b96b19d0d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago       Running             volume-snapshot-controller               0                   46613574c817a       snapshot-controller-7d9fbc56b8-l826d                         kube-system
	c092281ab72f4       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               About a minute ago       Running             cloud-spanner-emulator                   0                   264893e31d9ac       cloud-spanner-emulator-6f9fcf858b-j9ktg                      default
	9dcaa1d9a58cf       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago       Running             yakd                                     0                   b362663006f36       yakd-dashboard-5ff678cb9-gjr68                               yakd-dashboard
	8649b5d2321a7       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago       Running             registry                                 0                   a8d92b4d08e6d       registry-6b586f9694-tdrzt                                    kube-system
	36ab9635dbc1f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago       Running             minikube-ingress-dns                     0                   9657d3f312df7       kube-ingress-dns-minikube                                    kube-system
	b19635021e0f8       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago       Running             metrics-server                           0                   a02a3a03bb46c       metrics-server-85b7d694d7-qbbqn                              kube-system
	d286de98535c9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago       Running             coredns                                  0                   1f679bfc3737b       coredns-66bc5c9577-q9w79                                     kube-system
	8472deab524cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago       Running             storage-provisioner                      0                   5eb3eb7fcc297       storage-provisioner                                          kube-system
	ac5196f7d4eef       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago            Running             kindnet-cni                              0                   527f88f17e8df       kindnet-thccv                                                kube-system
	c71a5e39f6a59       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago            Running             kube-proxy                               0                   0020d30cb9558       kube-proxy-4zrl2                                             kube-system
	cb6c48350e965       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago            Running             kube-scheduler                           0                   a1b754859a9d7       kube-scheduler-addons-050432                                 kube-system
	381d7ec1c72ca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago            Running             etcd                                     0                   13b85f7258a47       etcd-addons-050432                                           kube-system
	80a3924ff0d87       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago            Running             kube-controller-manager                  0                   0cbc8ced4abd3       kube-controller-manager-addons-050432                        kube-system
	aa9abb8571eaa       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago            Running             kube-apiserver                           0                   57decffa05a5d       kube-apiserver-addons-050432                                 kube-system
	
	
	==> coredns [d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4] <==
	[INFO] 10.244.0.9:38810 - 49476 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.004847677s
	[INFO] 10.244.0.9:54160 - 7146 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000090095s
	[INFO] 10.244.0.9:54160 - 6809 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000127232s
	[INFO] 10.244.0.9:44380 - 20776 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000076047s
	[INFO] 10.244.0.9:44380 - 20487 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000103299s
	[INFO] 10.244.0.9:42227 - 29997 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000073614s
	[INFO] 10.244.0.9:42227 - 30252 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000116562s
	[INFO] 10.244.0.9:57353 - 50344 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114861s
	[INFO] 10.244.0.9:57353 - 50089 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143276s
	[INFO] 10.244.0.21:51948 - 42832 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000197371s
	[INFO] 10.244.0.21:40487 - 14643 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000253572s
	[INFO] 10.244.0.21:44727 - 2657 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000150419s
	[INFO] 10.244.0.21:47080 - 43540 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000212351s
	[INFO] 10.244.0.21:50225 - 38799 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132209s
	[INFO] 10.244.0.21:37871 - 60435 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000206592s
	[INFO] 10.244.0.21:48452 - 61275 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00328852s
	[INFO] 10.244.0.21:33572 - 57969 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00464107s
	[INFO] 10.244.0.21:59047 - 13750 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00622253s
	[INFO] 10.244.0.21:59885 - 15957 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006534195s
	[INFO] 10.244.0.21:35569 - 28997 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004884907s
	[INFO] 10.244.0.21:57551 - 42138 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006837135s
	[INFO] 10.244.0.21:44659 - 59464 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003463597s
	[INFO] 10.244.0.21:47174 - 27851 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005677545s
	[INFO] 10.244.0.21:46493 - 20068 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000949664s
	[INFO] 10.244.0.21:42302 - 20657 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001308513s
	
	
	==> describe nodes <==
	Name:               addons-050432
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-050432
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=addons-050432
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-050432
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-050432"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-050432
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:31:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:31:28 +0000   Sat, 01 Nov 2025 09:29:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:31:28 +0000   Sat, 01 Nov 2025 09:29:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:31:28 +0000   Sat, 01 Nov 2025 09:29:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:31:28 +0000   Sat, 01 Nov 2025 09:30:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-050432
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                bf32987f-5f0a-4a39-8f48-6b363304d873
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     cloud-spanner-emulator-6f9fcf858b-j9ktg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  default                     registry-test                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gadget                      gadget-hcssg                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  gcp-auth                    gcp-auth-78565c9fb4-ll292                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-z8482                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m14s
	  kube-system                 amd-gpu-device-plugin-xj8r5                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-q9w79                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m16s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 csi-hostpathplugin-kgt98                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 etcd-addons-050432                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m21s
	  kube-system                 kindnet-thccv                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-addons-050432                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-addons-050432                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-proxy-4zrl2                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-addons-050432                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 metrics-server-85b7d694d7-qbbqn                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m14s
	  kube-system                 nvidia-device-plugin-daemonset-585vh                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 registry-6b586f9694-tdrzt                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 registry-creds-764b6fb674-8s95r                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 registry-proxy-ftdnb                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 snapshot-controller-7d9fbc56b8-l826d                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-tqzj5                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  local-path-storage          helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-vwzcp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-gjr68                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m14s  kube-proxy       
	  Normal  Starting                 2m22s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s  kubelet          Node addons-050432 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s  kubelet          Node addons-050432 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s  kubelet          Node addons-050432 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m17s  node-controller  Node addons-050432 event: Registered Node addons-050432 in Controller
	  Normal  NodeReady                94s    kubelet          Node addons-050432 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 68 7f d8 2e 62 08 06
	[Nov 1 09:17] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e 3c fb 5f 7b ec 08 06
	[  +0.749824] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da e3 3b e3 16 70 08 06
	[  +0.028622] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae fc 6a ed e6 fb 08 06
	[  +4.640443] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 4a 11 50 9c e7 ff 08 06
	[ +31.111436] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 4e a2 e7 ae 33 76 08 06
	[  +0.655773] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 ce 1d db 29 64 08 06
	[  +0.035724] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 35 31 1e 6d 72 08 06
	[  +5.180949] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 5e 11 6b 5b 97 08 06
	[Nov 1 09:18] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 f7 b1 c2 e5 91 08 06
	[  +1.078447] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 72 48 1b 37 be 2c 08 06
	[  +0.039074] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	
	
	==> etcd [381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd] <==
	{"level":"warn","ts":"2025-11-01T09:29:23.207558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.214471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.227958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.234382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.241481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:23.293327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:34.560338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:29:34.568012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:00.696693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:00.703196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:00.718244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:00.734708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55526","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:30:37.358150Z","caller":"traceutil/trace.go:172","msg":"trace[992132178] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"122.557838ms","start":"2025-11-01T09:30:37.235565Z","end":"2025-11-01T09:30:37.358123Z","steps":["trace[992132178] 'process raft request'  (duration: 122.504158ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:37.358162Z","caller":"traceutil/trace.go:172","msg":"trace[1049890627] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"123.783863ms","start":"2025-11-01T09:30:37.234352Z","end":"2025-11-01T09:30:37.358136Z","steps":["trace[1049890627] 'process raft request'  (duration: 123.27207ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:30:43.326242Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.849826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:30:43.326362Z","caller":"traceutil/trace.go:172","msg":"trace[1309233321] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"104.989508ms","start":"2025-11-01T09:30:43.221354Z","end":"2025-11-01T09:30:43.326343Z","steps":["trace[1309233321] 'agreement among raft nodes before linearized reading'  (duration: 54.782995ms)","trace[1309233321] 'range keys from in-memory index tree'  (duration: 50.029059ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:30:43.326359Z","caller":"traceutil/trace.go:172","msg":"trace[138011217] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"144.66617ms","start":"2025-11-01T09:30:43.181682Z","end":"2025-11-01T09:30:43.326348Z","steps":["trace[138011217] 'process raft request'  (duration: 94.495371ms)","trace[138011217] 'compare'  (duration: 49.980206ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:30:43.326506Z","caller":"traceutil/trace.go:172","msg":"trace[915082310] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"138.930428ms","start":"2025-11-01T09:30:43.187552Z","end":"2025-11-01T09:30:43.326482Z","steps":["trace[915082310] 'process raft request'  (duration: 138.738529ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:43.486253Z","caller":"traceutil/trace.go:172","msg":"trace[386791505] linearizableReadLoop","detail":"{readStateIndex:1141; appliedIndex:1141; }","duration":"125.441591ms","start":"2025-11-01T09:30:43.360777Z","end":"2025-11-01T09:30:43.486218Z","steps":["trace[386791505] 'read index received'  (duration: 125.432048ms)","trace[386791505] 'applied index is now lower than readState.Index'  (duration: 8.575µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:30:43.507734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.932645ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:30:43.507821Z","caller":"traceutil/trace.go:172","msg":"trace[1045497463] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1107; }","duration":"147.029855ms","start":"2025-11-01T09:30:43.360772Z","end":"2025-11-01T09:30:43.507802Z","steps":["trace[1045497463] 'agreement among raft nodes before linearized reading'  (duration: 125.5481ms)","trace[1045497463] 'range keys from in-memory index tree'  (duration: 21.355985ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:30:43.507914Z","caller":"traceutil/trace.go:172","msg":"trace[2105378846] transaction","detail":"{read_only:false; response_revision:1108; number_of_response:1; }","duration":"177.304009ms","start":"2025-11-01T09:30:43.330597Z","end":"2025-11-01T09:30:43.507901Z","steps":["trace[2105378846] 'process raft request'  (duration: 155.654302ms)","trace[2105378846] 'compare'  (duration: 21.500748ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:30:43.566910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.897318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:30:43.566982Z","caller":"traceutil/trace.go:172","msg":"trace[127506552] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1108; }","duration":"162.986628ms","start":"2025-11-01T09:30:43.403982Z","end":"2025-11-01T09:30:43.566969Z","steps":["trace[127506552] 'agreement among raft nodes before linearized reading'  (duration: 162.818491ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:43.567098Z","caller":"traceutil/trace.go:172","msg":"trace[308816269] transaction","detail":"{read_only:false; response_revision:1109; number_of_response:1; }","duration":"234.286956ms","start":"2025-11-01T09:30:43.332794Z","end":"2025-11-01T09:30:43.567081Z","steps":["trace[308816269] 'process raft request'  (duration: 234.177285ms)"],"step_count":1}
	
	
	==> gcp-auth [5ef835cd52f219e26ae9cd94a356a41e4d8a412a0cc14ffe5f5e2e93827e82b5] <==
	2025/11/01 09:30:55 GCP Auth Webhook started!
	2025/11/01 09:31:25 Ready to marshal response ...
	2025/11/01 09:31:25 Ready to write response ...
	2025/11/01 09:31:25 Ready to marshal response ...
	2025/11/01 09:31:25 Ready to write response ...
	2025/11/01 09:31:25 Ready to marshal response ...
	2025/11/01 09:31:25 Ready to write response ...
	2025/11/01 09:31:35 Ready to marshal response ...
	2025/11/01 09:31:35 Ready to write response ...
	2025/11/01 09:31:35 Ready to marshal response ...
	2025/11/01 09:31:35 Ready to write response ...
	2025/11/01 09:31:45 Ready to marshal response ...
	2025/11/01 09:31:45 Ready to write response ...
	2025/11/01 09:31:45 Ready to marshal response ...
	2025/11/01 09:31:45 Ready to write response ...
	
	
	==> kernel <==
	 09:31:47 up  2:14,  0 user,  load average: 0.98, 1.46, 15.58
	Linux addons-050432 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120] <==
	E1101 09:30:02.885148       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:30:02.885169       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 09:30:04.485450       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:30:04.485487       1 metrics.go:72] Registering metrics
	I1101 09:30:04.485579       1 controller.go:711] "Syncing nftables rules"
	I1101 09:30:12.884088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:12.884138       1 main.go:301] handling current node
	I1101 09:30:22.884136       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:22.884189       1 main.go:301] handling current node
	I1101 09:30:32.884387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:32.884432       1 main.go:301] handling current node
	I1101 09:30:42.884328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:42.884403       1 main.go:301] handling current node
	I1101 09:30:52.884372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:30:52.884406       1 main.go:301] handling current node
	I1101 09:31:02.883924       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:02.883975       1 main.go:301] handling current node
	I1101 09:31:12.883678       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:12.883730       1 main.go:301] handling current node
	I1101 09:31:22.886042       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:22.886075       1 main.go:301] handling current node
	I1101 09:31:32.883410       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:32.883446       1 main.go:301] handling current node
	I1101 09:31:42.884222       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:31:42.884260       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b] <==
	I1101 09:29:40.342718       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.213.38"}
	W1101 09:30:00.696662       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:30:00.703115       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:30:00.718204       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:30:00.727159       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 09:30:13.074688       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.213.38:443: connect: connection refused
	W1101 09:30:13.074739       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.213.38:443: connect: connection refused
	E1101 09:30:13.074738       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.213.38:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:13.074769       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.213.38:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:13.096425       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.213.38:443: connect: connection refused
	E1101 09:30:13.096474       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.213.38:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:13.101705       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.213.38:443: connect: connection refused
	E1101 09:30:13.101816       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.213.38:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:16.074479       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.214.173:443: connect: connection refused" logger="UnhandledError"
	W1101 09:30:16.074509       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:30:16.074581       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:30:16.074821       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.214.173:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:16.080816       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.214.173:443: connect: connection refused" logger="UnhandledError"
	E1101 09:30:16.101567       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.214.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.214.173:443: connect: connection refused" logger="UnhandledError"
	I1101 09:30:16.176726       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:31:34.585216       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40886: use of closed network connection
	E1101 09:31:34.742768       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40910: use of closed network connection
	
	
	==> kube-controller-manager [80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853] <==
	I1101 09:29:30.678641       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:29:30.680003       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:29:30.680040       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:29:30.680096       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:29:30.680119       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:29:30.680180       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:29:30.680247       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:29:30.680281       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:29:30.680456       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:29:30.680896       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:29:30.680902       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:29:30.682087       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:29:30.684098       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:29:30.685203       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:29:30.686368       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:29:30.700874       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:29:33.403278       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 09:30:00.689275       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:30:00.689510       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:30:00.689587       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:30:00.708568       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:30:00.712576       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:30:00.790248       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:00.812704       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:30:15.634427       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90] <==
	I1101 09:29:32.441575       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:29:32.693991       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:29:32.799217       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:29:32.799259       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:29:32.799380       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:29:33.100213       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:29:33.100313       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:29:33.142254       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:29:33.142815       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:29:33.142979       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:33.145119       1 config.go:200] "Starting service config controller"
	I1101 09:29:33.145176       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:29:33.145234       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:29:33.145257       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:29:33.145323       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:29:33.145350       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:29:33.147655       1 config.go:309] "Starting node config controller"
	I1101 09:29:33.147712       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:29:33.256872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:29:33.257062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:29:33.257079       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:29:33.257112       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043] <==
	E1101 09:29:23.704706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:29:23.704761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:29:23.704767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:29:23.704814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:29:23.704820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:29:23.704870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:29:23.704903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:29:23.704928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:29:23.704905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:29:23.704978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:29:23.705012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:29:23.705058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:29:23.705063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:29:23.704564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:29:23.705116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:29:24.560210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:29:24.609948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:29:24.660255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:29:24.703974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:29:24.708169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:29:24.816712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:29:24.929254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:29:24.943432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:29:24.950572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1101 09:29:25.302264       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:31:40 addons-050432 kubelet[1284]: I1101 09:31:40.424000    1284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e641c06566fc8a0861b026d53d1e8cae9c2ae4efda4f507cc429efaa30dd882"
	Nov 01 09:31:40 addons-050432 kubelet[1284]: E1101 09:31:40.425564    1284 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-create-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\" is forbidden: User \"system:node:addons-050432\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-050432' and this object" podUID="19f29515-1b23-44f6-aa6e-1791b6b4346e" pod="local-path-storage/helper-pod-create-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5"
	Nov 01 09:31:41 addons-050432 kubelet[1284]: E1101 09:31:41.271459    1284 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-create-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\" is forbidden: User \"system:node:addons-050432\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-050432' and this object" podUID="19f29515-1b23-44f6-aa6e-1791b6b4346e" pod="local-path-storage/helper-pod-create-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5"
	Nov 01 09:31:41 addons-050432 kubelet[1284]: I1101 09:31:41.310404    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsr2n\" (UniqueName: \"kubernetes.io/projected/026e0689-ffdb-41a6-af29-6c9f71bfdf56-kube-api-access-dsr2n\") pod \"test-local-path\" (UID: \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\") " pod="default/test-local-path"
	Nov 01 09:31:41 addons-050432 kubelet[1284]: I1101 09:31:41.310468    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/026e0689-ffdb-41a6-af29-6c9f71bfdf56-gcp-creds\") pod \"test-local-path\" (UID: \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\") " pod="default/test-local-path"
	Nov 01 09:31:41 addons-050432 kubelet[1284]: I1101 09:31:41.310506    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\" (UniqueName: \"kubernetes.io/host-path/026e0689-ffdb-41a6-af29-6c9f71bfdf56-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\") pod \"test-local-path\" (UID: \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\") " pod="default/test-local-path"
	Nov 01 09:31:41 addons-050432 kubelet[1284]: I1101 09:31:41.902933    1284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f29515-1b23-44f6-aa6e-1791b6b4346e" path="/var/lib/kubelet/pods/19f29515-1b23-44f6-aa6e-1791b6b4346e/volumes"
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.536228    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/026e0689-ffdb-41a6-af29-6c9f71bfdf56-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\") pod \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\" (UID: \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\") "
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.536304    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsr2n\" (UniqueName: \"kubernetes.io/projected/026e0689-ffdb-41a6-af29-6c9f71bfdf56-kube-api-access-dsr2n\") pod \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\" (UID: \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\") "
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.536353    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/026e0689-ffdb-41a6-af29-6c9f71bfdf56-gcp-creds\") pod \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\" (UID: \"026e0689-ffdb-41a6-af29-6c9f71bfdf56\") "
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.536372    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/026e0689-ffdb-41a6-af29-6c9f71bfdf56-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5" (OuterVolumeSpecName: "data") pod "026e0689-ffdb-41a6-af29-6c9f71bfdf56" (UID: "026e0689-ffdb-41a6-af29-6c9f71bfdf56"). InnerVolumeSpecName "pvc-611c46b8-835f-4e6f-b58e-711be421d3e5". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.536442    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/026e0689-ffdb-41a6-af29-6c9f71bfdf56-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "026e0689-ffdb-41a6-af29-6c9f71bfdf56" (UID: "026e0689-ffdb-41a6-af29-6c9f71bfdf56"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.536536    1284 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/026e0689-ffdb-41a6-af29-6c9f71bfdf56-gcp-creds\") on node \"addons-050432\" DevicePath \"\""
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.536554    1284 reconciler_common.go:299] "Volume detached for volume \"pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\" (UniqueName: \"kubernetes.io/host-path/026e0689-ffdb-41a6-af29-6c9f71bfdf56-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\") on node \"addons-050432\" DevicePath \"\""
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.538682    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026e0689-ffdb-41a6-af29-6c9f71bfdf56-kube-api-access-dsr2n" (OuterVolumeSpecName: "kube-api-access-dsr2n") pod "026e0689-ffdb-41a6-af29-6c9f71bfdf56" (UID: "026e0689-ffdb-41a6-af29-6c9f71bfdf56"). InnerVolumeSpecName "kube-api-access-dsr2n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.637799    1284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsr2n\" (UniqueName: \"kubernetes.io/projected/026e0689-ffdb-41a6-af29-6c9f71bfdf56-kube-api-access-dsr2n\") on node \"addons-050432\" DevicePath \"\""
	Nov 01 09:31:44 addons-050432 kubelet[1284]: I1101 09:31:44.901353    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xj8r5" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:31:45 addons-050432 kubelet[1284]: I1101 09:31:45.243243    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9zg\" (UniqueName: \"kubernetes.io/projected/931ae887-0378-46e7-b98b-e7c3b4b4d0da-kube-api-access-rh9zg\") pod \"registry-test\" (UID: \"931ae887-0378-46e7-b98b-e7c3b4b4d0da\") " pod="default/registry-test"
	Nov 01 09:31:45 addons-050432 kubelet[1284]: I1101 09:31:45.243302    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/931ae887-0378-46e7-b98b-e7c3b4b4d0da-gcp-creds\") pod \"registry-test\" (UID: \"931ae887-0378-46e7-b98b-e7c3b4b4d0da\") " pod="default/registry-test"
	Nov 01 09:31:45 addons-050432 kubelet[1284]: I1101 09:31:45.452487    1284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8c2dd1905e07bf2a569b7336845ff499d2de9a62acee3155f67ccfbde352ee7"
	Nov 01 09:31:45 addons-050432 kubelet[1284]: I1101 09:31:45.903115    1284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026e0689-ffdb-41a6-af29-6c9f71bfdf56" path="/var/lib/kubelet/pods/026e0689-ffdb-41a6-af29-6c9f71bfdf56/volumes"
	Nov 01 09:31:45 addons-050432 kubelet[1284]: I1101 09:31:45.950592    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/9b332241-9209-4a66-9c90-1a2c3b91deee-script\") pod \"helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\" (UID: \"9b332241-9209-4a66-9c90-1a2c3b91deee\") " pod="local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5"
	Nov 01 09:31:45 addons-050432 kubelet[1284]: I1101 09:31:45.950634    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzdb\" (UniqueName: \"kubernetes.io/projected/9b332241-9209-4a66-9c90-1a2c3b91deee-kube-api-access-gtzdb\") pod \"helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\" (UID: \"9b332241-9209-4a66-9c90-1a2c3b91deee\") " pod="local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5"
	Nov 01 09:31:45 addons-050432 kubelet[1284]: I1101 09:31:45.950716    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9b332241-9209-4a66-9c90-1a2c3b91deee-gcp-creds\") pod \"helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\" (UID: \"9b332241-9209-4a66-9c90-1a2c3b91deee\") " pod="local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5"
	Nov 01 09:31:45 addons-050432 kubelet[1284]: I1101 09:31:45.950757    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/9b332241-9209-4a66-9c90-1a2c3b91deee-data\") pod \"helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5\" (UID: \"9b332241-9209-4a66-9c90-1a2c3b91deee\") " pod="local-path-storage/helper-pod-delete-pvc-611c46b8-835f-4e6f-b58e-711be421d3e5"
	
	
	==> storage-provisioner [8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7] <==
	W1101 09:31:21.873043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:23.876327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:23.879956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:25.883730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:25.889049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:27.891989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:27.898066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:29.901518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:29.905621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:31.909297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:31.913190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:33.916616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:33.920595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:35.923351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:35.928404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:37.931544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:37.935388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:39.938815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:39.943592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:41.945997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:41.951464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.954929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:43.959747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:45.963176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:31:45.967540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
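The repeated client-go warnings in the storage-provisioner log above come from it still reading v1 Endpoints objects every couple of seconds (most likely its leader-election lock, though the log does not name the object it polls). A minimal sketch for inspecting the discovery.k8s.io/v1 EndpointSlice resources the deprecation warning points to, reusing the kubectl context from this run:

	kubectl --context addons-050432 -n kube-system get endpointslices.discovery.k8s.io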
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-050432 -n addons-050432
helpers_test.go:269: (dbg) Run:  kubectl --context addons-050432 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: registry-test ingress-nginx-admission-create-6l9tg ingress-nginx-admission-patch-8r4w5 registry-creds-764b6fb674-8s95r
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-050432 describe pod registry-test ingress-nginx-admission-create-6l9tg ingress-nginx-admission-patch-8r4w5 registry-creds-764b6fb674-8s95r
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-050432 describe pod registry-test ingress-nginx-admission-create-6l9tg ingress-nginx-admission-patch-8r4w5 registry-creds-764b6fb674-8s95r: exit status 1 (72.188812ms)

                                                
                                                
-- stdout --
	Name:             registry-test
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-050432/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:31:45 +0000
	Labels:           run=registry-test
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rh9zg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rh9zg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/registry-test to addons-050432
	  Normal  Pulling    3s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox" in 2.534s (2.534s including waiting). Image size: 1462480 bytes.
	  Normal  Created    0s    kubelet            Created container: registry-test
	  Normal  Started    0s    kubelet            Started container registry-test

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6l9tg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8r4w5" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-8s95r" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-050432 describe pod registry-test ingress-nginx-admission-create-6l9tg ingress-nginx-admission-patch-8r4w5 registry-creds-764b6fb674-8s95r: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable headlamp --alsologtostderr -v=1: exit status 11 (262.791279ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:48.226047  529364 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:48.226362  529364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:48.226374  529364 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:48.226378  529364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:48.226664  529364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:48.227042  529364 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:48.227409  529364 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:48.227435  529364 addons.go:607] checking whether the cluster is paused
	I1101 09:31:48.227552  529364 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:48.227575  529364 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:48.227995  529364 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:48.249058  529364 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:48.249116  529364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:48.270063  529364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:48.373101  529364 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:48.373221  529364 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:48.404338  529364 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:48.404370  529364 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:48.404374  529364 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:48.404377  529364 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:48.404379  529364 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:48.404384  529364 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:48.404387  529364 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:48.404389  529364 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:48.404393  529364 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:48.404407  529364 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:48.404415  529364 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:48.404419  529364 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:48.404423  529364 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:48.404426  529364 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:48.404429  529364 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:48.404440  529364 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:48.404447  529364 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:48.404454  529364 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:48.404457  529364 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:48.404459  529364 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:48.404462  529364 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:48.404464  529364 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:48.404466  529364 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:48.404469  529364 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:48.404471  529364 cri.go:89] found id: ""
	I1101 09:31:48.404539  529364 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:48.419326  529364 out.go:203] 
	W1101 09:31:48.420374  529364 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:48.420393  529364 out.go:285] * 
	* 
	W1101 09:31:48.423430  529364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:48.424483  529364 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.79s)
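The exit status 11 above is the same MK_ADDON_DISABLE_PAUSED failure seen in the other addon-disable tests in this run: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system CRI containers and then asking runc for its container list, and on this CRI-O node /run/runc does not exist. A minimal sketch for re-running both halves of that check by hand, reusing the profile from this run; the alternate --root path is only a guess at where CRI-O keeps runc state and is not confirmed by these logs:

	# container listing via crictl (succeeds, as in the stderr above)
	out/minikube-linux-amd64 -p addons-050432 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the paused-state probe that fails with "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p addons-050432 ssh "sudo runc list -f json"
	# hypothetical: point runc at a CRI-O runtime root instead of the default /run/runc
	out/minikube-linux-amd64 -p addons-050432 ssh "sudo runc --root /run/crio/runc list -f json"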

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-j9ktg" [197e9849-ccb2-4945-81ca-0c3419215769] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004137757s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (304.881004ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:51.310467  530018 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:51.310832  530018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:51.310879  530018 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:51.310887  530018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:51.311244  530018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:51.311638  530018 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:51.312176  530018 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:51.312205  530018 addons.go:607] checking whether the cluster is paused
	I1101 09:31:51.312340  530018 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:51.312363  530018 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:51.312988  530018 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:51.337741  530018 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:51.337810  530018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:51.362803  530018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:51.471494  530018 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:51.471585  530018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:51.505857  530018 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:51.505885  530018 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:51.505891  530018 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:51.505895  530018 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:51.505899  530018 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:51.505903  530018 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:51.505907  530018 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:51.505911  530018 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:51.505915  530018 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:51.505923  530018 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:51.505927  530018 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:51.505932  530018 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:51.505936  530018 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:51.505941  530018 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:51.505945  530018 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:51.505958  530018 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:51.505967  530018 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:51.505973  530018 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:51.505978  530018 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:51.505982  530018 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:51.505987  530018 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:51.505991  530018 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:51.505994  530018 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:51.505997  530018 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:51.505999  530018 cri.go:89] found id: ""
	I1101 09:31:51.506048  530018 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:51.522048  530018 out.go:203] 
	W1101 09:31:51.522986  530018 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:51.523006  530018 out.go:285] * 
	* 
	W1101 09:31:51.526066  530018 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:51.527349  530018 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.32s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (11.21s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-050432 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-050432 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-050432 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [026e0689-ffdb-41a6-af29-6c9f71bfdf56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [026e0689-ffdb-41a6-af29-6c9f71bfdf56] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [026e0689-ffdb-41a6-af29-6c9f71bfdf56] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003996785s
addons_test.go:967: (dbg) Run:  kubectl --context addons-050432 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 ssh "cat /opt/local-path-provisioner/pvc-611c46b8-835f-4e6f-b58e-711be421d3e5_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-050432 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-050432 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (284.270799ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:46.002938  528560 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:46.003069  528560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:46.003081  528560 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:46.003087  528560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:46.003419  528560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:46.003774  528560 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:46.004196  528560 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:46.004224  528560 addons.go:607] checking whether the cluster is paused
	I1101 09:31:46.004334  528560 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:46.004365  528560 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:46.004829  528560 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:46.022904  528560 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:46.022979  528560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:46.044940  528560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:46.150080  528560 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:46.150157  528560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:46.184918  528560 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:46.184967  528560 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:46.184975  528560 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:46.184982  528560 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:46.184987  528560 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:46.184995  528560 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:46.185000  528560 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:46.185005  528560 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:46.185010  528560 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:46.185041  528560 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:46.185051  528560 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:46.185060  528560 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:46.185068  528560 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:46.185073  528560 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:46.185082  528560 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:46.185102  528560 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:46.185114  528560 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:46.185122  528560 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:46.185127  528560 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:46.185156  528560 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:46.185170  528560 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:46.185193  528560 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:46.185198  528560 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:46.185203  528560 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:46.185207  528560 cri.go:89] found id: ""
	I1101 09:31:46.185282  528560 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:46.202380  528560 out.go:203] 
	W1101 09:31:46.203657  528560 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:46.203683  528560 out.go:285] * 
	* 
	W1101 09:31:46.207922  528560 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:46.209363  528560 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (11.21s)
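The addon-disable failures in this run share one signature: crictl lists the kube-system containers successfully, but the follow-up paused-state check `sudo runc list -f json` exits 1 because /run/runc does not exist on this crio node. A minimal sketch to replay both commands by hand, assuming the addons-050432 profile from this run is still up; both commands are copied from the stderr above, only the ssh wrapper is added:

    out/minikube-linux-amd64 -p addons-050432 ssh -- 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
    out/minikube-linux-amd64 -p addons-050432 ssh -- 'sudo runc list -f json'   # expected to fail: open /run/runc: no such file or directory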

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-585vh" [a77cc1f1-85cb-4703-a429-f8b4eb535dfc] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004279394s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (266.792525ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:40.073760  527929 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:40.074080  527929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:40.074091  527929 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:40.074095  527929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:40.074344  527929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:40.074682  527929 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:40.075090  527929 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:40.075111  527929 addons.go:607] checking whether the cluster is paused
	I1101 09:31:40.075216  527929 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:40.075244  527929 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:40.075700  527929 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:40.094699  527929 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:40.094758  527929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:40.112868  527929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:40.213126  527929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:40.213216  527929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:40.247180  527929 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:40.247219  527929 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:40.247223  527929 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:40.247228  527929 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:40.247230  527929 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:40.247234  527929 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:40.247237  527929 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:40.247240  527929 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:40.247243  527929 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:40.247266  527929 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:40.247277  527929 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:40.247282  527929 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:40.247286  527929 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:40.247291  527929 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:40.247295  527929 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:40.247305  527929 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:40.247310  527929 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:40.247314  527929 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:40.247317  527929 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:40.247319  527929 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:40.247322  527929 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:40.247324  527929 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:40.247326  527929 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:40.247330  527929 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:40.247332  527929 cri.go:89] found id: ""
	I1101 09:31:40.247393  527929 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:40.266271  527929 out.go:203] 
	W1101 09:31:40.267403  527929 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:40.267435  527929 out.go:285] * 
	* 
	W1101 09:31:40.270697  527929 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:40.272078  527929 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
TestAddons/parallel/Yakd (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-gjr68" [270a0c99-c7ab-41b6-99af-4f073b85303c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004271843s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable yakd --alsologtostderr -v=1: exit status 11 (266.870324ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:40.075620  527928 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:40.075928  527928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:40.075938  527928 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:40.075941  527928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:40.076155  527928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:40.076436  527928 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:40.076817  527928 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:40.076849  527928 addons.go:607] checking whether the cluster is paused
	I1101 09:31:40.076951  527928 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:40.076975  527928 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:40.077395  527928 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:40.096538  527928 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:40.096593  527928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:40.115031  527928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:40.213912  527928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:40.214009  527928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:40.247528  527928 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:40.247568  527928 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:40.247574  527928 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:40.247579  527928 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:40.247587  527928 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:40.247592  527928 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:40.247599  527928 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:40.247601  527928 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:40.247603  527928 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:40.247609  527928 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:40.247614  527928 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:40.247617  527928 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:40.247620  527928 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:40.247627  527928 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:40.247640  527928 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:40.247650  527928 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:40.247657  527928 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:40.247662  527928 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:40.247666  527928 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:40.247669  527928 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:40.247672  527928 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:40.247676  527928 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:40.247680  527928 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:40.247684  527928 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:40.247688  527928 cri.go:89] found id: ""
	I1101 09:31:40.247733  527928 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:40.266274  527928 out.go:203] 
	W1101 09:31:40.267393  527928 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:40.267426  527928 out.go:285] * 
	* 
	W1101 09:31:40.271001  527928 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:40.272078  527928 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-xj8r5" [faddc6aa-a08b-49f8-a58f-73afc131c1a3] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004079189s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-050432 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-050432 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (300.265643ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:31:51.652153  530184 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:31:51.652496  530184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:51.652510  530184 out.go:374] Setting ErrFile to fd 2...
	I1101 09:31:51.652516  530184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:31:51.652806  530184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:31:51.653260  530184 mustload.go:66] Loading cluster: addons-050432
	I1101 09:31:51.653740  530184 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:51.653763  530184 addons.go:607] checking whether the cluster is paused
	I1101 09:31:51.653908  530184 config.go:182] Loaded profile config "addons-050432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:31:51.653931  530184 host.go:66] Checking if "addons-050432" exists ...
	I1101 09:31:51.654506  530184 cli_runner.go:164] Run: docker container inspect addons-050432 --format={{.State.Status}}
	I1101 09:31:51.676512  530184 ssh_runner.go:195] Run: systemctl --version
	I1101 09:31:51.676577  530184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-050432
	I1101 09:31:51.699720  530184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/addons-050432/id_rsa Username:docker}
	I1101 09:31:51.812263  530184 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:31:51.812346  530184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:31:51.849955  530184 cri.go:89] found id: "0cd2226cd22ce9ac9f0baeb6ea41e148f8f010b281d358122d1b8f72e061fd09"
	I1101 09:31:51.849979  530184 cri.go:89] found id: "ebc6c01c90c2ffef7b5ae39c5c0ecde8bada6424136d656c03f0e416fbf7638f"
	I1101 09:31:51.849983  530184 cri.go:89] found id: "81c14cf7ac31fd4deac014f8cc58073643620b2bff8afeda53624406507e50fd"
	I1101 09:31:51.849986  530184 cri.go:89] found id: "ba4952c9861dab1e064fb2d2a3f1bb9cc4772f9b0f13448686dd498e8c7407aa"
	I1101 09:31:51.849989  530184 cri.go:89] found id: "f18ba15647b794853433daf79b334ab349ebe730ef67632a558f0c6394c24c3c"
	I1101 09:31:51.849992  530184 cri.go:89] found id: "b24762f9cf57c9414e38b4d1104efdf86412768a3dda4d62163f0d2905b90066"
	I1101 09:31:51.849995  530184 cri.go:89] found id: "43b485de84b03f8e5b77af81c9ba7f0ddff86cefe7466bce2129c26456bc50c4"
	I1101 09:31:51.849998  530184 cri.go:89] found id: "1b71e4eeb4433351951e6788666fe18c4a249f639d3255b57ac57b6855df1cdb"
	I1101 09:31:51.850000  530184 cri.go:89] found id: "c4071d2f7fecc51ee3ab6b5a41eb1b3dc496f3f3228ffb095dca48b2fd1da674"
	I1101 09:31:51.850006  530184 cri.go:89] found id: "c19b6a74eec58eb01bebb7a4d9b8856189edace001cfbcaae74a5f9265aa53d4"
	I1101 09:31:51.850009  530184 cri.go:89] found id: "47018dafba3284bb465416642a69832fd0636df4c45ac3d6dff2df4709d6830c"
	I1101 09:31:51.850012  530184 cri.go:89] found id: "39e74546adc34b09d043b3fe42cf0589e32113817d1eb82f87311b9fd92a3116"
	I1101 09:31:51.850014  530184 cri.go:89] found id: "c898b96b19d0d8fb5319316dfb9fea48b91b7b6cd07aebf74b451cbb3b171197"
	I1101 09:31:51.850017  530184 cri.go:89] found id: "8649b5d2321a7d67ade1ec0d53d3d1fba70f616835ceed2643b8f2ef020b7fa3"
	I1101 09:31:51.850029  530184 cri.go:89] found id: "36ab9635dbc1f6b55edceeef1c7f4a770a2d9d4225aebd2ffa24bf91d552b108"
	I1101 09:31:51.850033  530184 cri.go:89] found id: "b19635021e0f8e7ce2ec7a67abde4e7bc870a9b2fae7b48491f2753d2ca1a0eb"
	I1101 09:31:51.850036  530184 cri.go:89] found id: "d286de98535c9ae141dedb8f99fac9471b6794fa26b4b4da4c9ba958931b8cc4"
	I1101 09:31:51.850040  530184 cri.go:89] found id: "8472deab524cb8c088fba03c3ad1b293da9bdc2171b0b46d548b79bbb54bf7a7"
	I1101 09:31:51.850043  530184 cri.go:89] found id: "ac5196f7d4eef01cf0419b3f3829bd579c47a0c565aa0607c2bdb7a70f137120"
	I1101 09:31:51.850045  530184 cri.go:89] found id: "c71a5e39f6a5964ca7d470adbf2e35054095d8b12238018aaaf37d18638d0e90"
	I1101 09:31:51.850048  530184 cri.go:89] found id: "cb6c48350e96535630fbced231218dca77f266876b669d076dd201e121a81043"
	I1101 09:31:51.850050  530184 cri.go:89] found id: "381d7ec1c72ca64bd07ed3c621c70b06e39c1e0da6d6a44421e8d851ff2d5ebd"
	I1101 09:31:51.850052  530184 cri.go:89] found id: "80a3924ff0d8717ae17b9c6a086dfb2828ebdeea64e4932144b8c56385af3853"
	I1101 09:31:51.850056  530184 cri.go:89] found id: "aa9abb8571eaa19c7e1b263068276d99b55fbd7021e8cbf16ceaef8b6267789b"
	I1101 09:31:51.850059  530184 cri.go:89] found id: ""
	I1101 09:31:51.850096  530184 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:31:51.866968  530184 out.go:203] 
	W1101 09:31:51.868453  530184 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:31:51.868481  530184 out.go:285] * 
	* 
	W1101 09:31:51.872822  530184 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:31:51.873980  530184 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-050432 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.31s)
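The addon tests above all fail at the same `runc list` step rather than in the addon under test. A hedged way to confirm the runtime/state-directory mismatch on the node; the crun path below is an assumption, not taken from this run:

    out/minikube-linux-amd64 -p addons-050432 ssh -- 'sudo crictl info | grep -i runtime'   # rough check of the configured runtime handler
    out/minikube-linux-amd64 -p addons-050432 ssh -- 'ls -d /run/runc /run/crun'            # assumption: a crun state dir may exist where /run/runc does not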

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-593346 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-593346 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-qqnvc" [7c53dd94-dc07-41b5-9a2e-24866f379988] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-593346 -n functional-593346
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 09:47:10.241209989 +0000 UTC m=+1137.947110530
functional_test.go:1645: (dbg) Run:  kubectl --context functional-593346 describe po hello-node-connect-7d85dfc575-qqnvc -n default
functional_test.go:1645: (dbg) kubectl --context functional-593346 describe po hello-node-connect-7d85dfc575-qqnvc -n default:
Name:             hello-node-connect-7d85dfc575-qqnvc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-593346/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:37:09 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ncbww (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ncbww:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qqnvc to functional-593346
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-593346 logs hello-node-connect-7d85dfc575-qqnvc -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-593346 logs hello-node-connect-7d85dfc575-qqnvc -n default: exit status 1 (73.031795ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qqnvc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-593346 logs hello-node-connect-7d85dfc575-qqnvc -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-593346 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-qqnvc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-593346/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:37:09 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ncbww (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ncbww:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qqnvc to functional-593346
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-593346 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-593346 logs -l app=hello-node-connect: exit status 1 (65.108312ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qqnvc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-593346 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-593346 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.123.242
IPs:                      10.99.123.242
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32458/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
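The kubelet events and the empty Endpoints list above point at the same root cause: the node's short-name policy is enforcing, so the unqualified image reference kicbase/echo-server resolves to an ambiguous list and the pull never succeeds. A minimal sketch of a workaround, assuming the Docker Hub copy of the image is the intended one (the docker.io path is an assumption, not taken from this run):

    kubectl --context functional-593346 delete deployment hello-node-connect
    kubectl --context functional-593346 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest
    kubectl --context functional-593346 expose deployment hello-node-connect --type=NodePort --port=8080

Fully qualifying the reference sidesteps the ambiguous-list error without relaxing the short-name policy on the node.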
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-593346
helpers_test.go:243: (dbg) docker inspect functional-593346:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c9690bb65cb08e93c7dd17aaecd1c941778792afbba7d274055141f8e109db5",
	        "Created": "2025-11-01T09:35:27.422244218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 541990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:35:27.45315887Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/3c9690bb65cb08e93c7dd17aaecd1c941778792afbba7d274055141f8e109db5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c9690bb65cb08e93c7dd17aaecd1c941778792afbba7d274055141f8e109db5/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c9690bb65cb08e93c7dd17aaecd1c941778792afbba7d274055141f8e109db5/hosts",
	        "LogPath": "/var/lib/docker/containers/3c9690bb65cb08e93c7dd17aaecd1c941778792afbba7d274055141f8e109db5/3c9690bb65cb08e93c7dd17aaecd1c941778792afbba7d274055141f8e109db5-json.log",
	        "Name": "/functional-593346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-593346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-593346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c9690bb65cb08e93c7dd17aaecd1c941778792afbba7d274055141f8e109db5",
	                "LowerDir": "/var/lib/docker/overlay2/57479efbea91d44b1488fbf733e7348f61011324894cf224ead04f7e054c747f-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57479efbea91d44b1488fbf733e7348f61011324894cf224ead04f7e054c747f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57479efbea91d44b1488fbf733e7348f61011324894cf224ead04f7e054c747f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57479efbea91d44b1488fbf733e7348f61011324894cf224ead04f7e054c747f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-593346",
	                "Source": "/var/lib/docker/volumes/functional-593346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-593346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-593346",
	                "name.minikube.sigs.k8s.io": "functional-593346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c7039d866a36de4d79f660e932aa437c1b510847a544dc41366a4defb70ae96e",
	            "SandboxKey": "/var/run/docker/netns/c7039d866a36",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-593346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:6c:8b:17:3f:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2cd8807ab04db9b7876da9e4a3fc5a73ae44c68d2f92181d17291af1fe9d5c87",
	                    "EndpointID": "1ce699b3cc107bceca465375f5c933b6e528eff2e3df70adf0866e85e416a2aa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-593346",
	                        "3c9690bb65cb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-593346 -n functional-593346
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-593346 logs -n 25: (1.361172641s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-593346 image ls                                                                                                                                      │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ dashboard      │ --url --port 36195 -p functional-593346 --alsologtostderr -v=1                                                                                                  │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image load --daemon kicbase/echo-server:functional-593346 --alsologtostderr                                                                   │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image ls                                                                                                                                      │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image save kicbase/echo-server:functional-593346 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image rm kicbase/echo-server:functional-593346 --alsologtostderr                                                                              │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image ls                                                                                                                                      │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image save --daemon kicbase/echo-server:functional-593346 --alsologtostderr                                                                   │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ cp             │ functional-593346 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                              │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ ssh            │ functional-593346 ssh -n functional-593346 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ cp             │ functional-593346 cp functional-593346:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3691132561/001/cp-test.txt                                      │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ ssh            │ functional-593346 ssh -n functional-593346 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ cp             │ functional-593346 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ ssh            │ functional-593346 ssh -n functional-593346 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image ls --format short --alsologtostderr                                                                                                     │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image ls --format yaml --alsologtostderr                                                                                                      │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ ssh            │ functional-593346 ssh pgrep buildkitd                                                                                                                           │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │                     │
	│ image          │ functional-593346 image build -t localhost/my-image:functional-593346 testdata/build --alsologtostderr                                                          │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image ls --format json --alsologtostderr                                                                                                      │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image ls --format table --alsologtostderr                                                                                                     │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ update-context │ functional-593346 update-context --alsologtostderr -v=2                                                                                                         │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ update-context │ functional-593346 update-context --alsologtostderr -v=2                                                                                                         │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ update-context │ functional-593346 update-context --alsologtostderr -v=2                                                                                                         │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	│ image          │ functional-593346 image ls                                                                                                                                      │ functional-593346 │ jenkins │ v1.37.0 │ 01 Nov 25 09:37 UTC │ 01 Nov 25 09:37 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:37:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:37:40.449290  554618 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:37:40.449415  554618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:37:40.449425  554618 out.go:374] Setting ErrFile to fd 2...
	I1101 09:37:40.449432  554618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:37:40.449796  554618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:37:40.450344  554618 out.go:368] Setting JSON to false
	I1101 09:37:40.451347  554618 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8397,"bootTime":1761981463,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:37:40.451449  554618 start.go:143] virtualization: kvm guest
	I1101 09:37:40.453100  554618 out.go:179] * [functional-593346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:37:40.454676  554618 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:37:40.454689  554618 notify.go:221] Checking for updates...
	I1101 09:37:40.456864  554618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:37:40.457964  554618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:37:40.459008  554618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 09:37:40.460018  554618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:37:40.464316  554618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:37:40.465738  554618 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:37:40.466327  554618 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:37:40.489953  554618 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:37:40.490069  554618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:37:40.551203  554618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 09:37:40.540249332 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:37:40.551388  554618 docker.go:319] overlay module found
	I1101 09:37:40.553518  554618 out.go:179] * Using the docker driver based on the existing profile
	I1101 09:37:40.554541  554618 start.go:309] selected driver: docker
	I1101 09:37:40.554559  554618 start.go:930] validating driver "docker" against &{Name:functional-593346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-593346 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:37:40.554686  554618 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:37:40.556319  554618 out.go:203] 
	W1101 09:37:40.557307  554618 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I1101 09:37:40.558188  554618 out.go:203] 
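	The start above aborts during driver validation with exit reason RSRC_INSUFFICIENT_REQ_MEMORY: the profile was asked to run with 250 MiB while minikube enforces a 1800 MB minimum. A hedged sketch of the kind of invocation that trips this check (the exact flags used by the failing test are an assumption):
	
	  # assumption: a start against the existing functional-593346 profile with an
	  # intentionally undersized --memory request; --dry-run validates without mutating state
	  out/minikube-linux-amd64 start -p functional-593346 --dry-run --memory=250MB --alsologtostderr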
	
	
	==> CRI-O <==
	Nov 01 09:37:47 functional-593346 crio[3594]: time="2025-11-01T09:37:47.556134926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:37:47 functional-593346 crio[3594]: time="2025-11-01T09:37:47.556312093Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5b6d7032b96f67cbb7c5e1c6209e82e37be6d564713bddf0b3018679f34985c1/merged/etc/group: no such file or directory"
	Nov 01 09:37:47 functional-593346 crio[3594]: time="2025-11-01T09:37:47.556612625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:37:47 functional-593346 crio[3594]: time="2025-11-01T09:37:47.584461889Z" level=info msg="Created container 7f916738e3a797da1e00fd2f8b705c44a3960f659f6e244d34e310709e028898: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7frnm/dashboard-metrics-scraper" id=92fd4072-28af-4497-8da2-f9cbee28ddf9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:37:47 functional-593346 crio[3594]: time="2025-11-01T09:37:47.585152082Z" level=info msg="Starting container: 7f916738e3a797da1e00fd2f8b705c44a3960f659f6e244d34e310709e028898" id=8327f3d4-2dca-4e8b-b30c-4fbc81862715 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:37:47 functional-593346 crio[3594]: time="2025-11-01T09:37:47.587033804Z" level=info msg="Started container" PID=7369 containerID=7f916738e3a797da1e00fd2f8b705c44a3960f659f6e244d34e310709e028898 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7frnm/dashboard-metrics-scraper id=8327f3d4-2dca-4e8b-b30c-4fbc81862715 name=/runtime.v1.RuntimeService/StartContainer sandboxID=114b8aec19fea4e6682816136db4658d04e732af8b7103a1e23d4173bceafced
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.388992154Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=f183a3dd-3d57-4f47-a86c-8c47e8415d6d name=/runtime.v1.ImageService/PullImage
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.389766171Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=bfa87ba2-0612-4dda-977c-d53f3131c469 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.391712201Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2e147d1e-8634-49dd-b2cf-f140632ce103 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.408418656Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-krh4k/kubernetes-dashboard" id=2831e993-80d2-48af-a361-b9b98c55b215 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.408928762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.468946351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.469146739Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3c32b4cc332ba08e13c77d9c673cfe29e454a64ff7723a9d1007555ab531e502/merged/etc/group: no such file or directory"
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.469568679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.504192473Z" level=info msg="Created container d42433c7cb548d2b7aff181b44e3c211ccbc36100d5409da3580c55960df9af7: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-krh4k/kubernetes-dashboard" id=2831e993-80d2-48af-a361-b9b98c55b215 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.504977155Z" level=info msg="Starting container: d42433c7cb548d2b7aff181b44e3c211ccbc36100d5409da3580c55960df9af7" id=4a0a6a30-7466-4f3b-a875-5e559cc0abce name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.506819982Z" level=info msg="Started container" PID=7652 containerID=d42433c7cb548d2b7aff181b44e3c211ccbc36100d5409da3580c55960df9af7 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-krh4k/kubernetes-dashboard id=4a0a6a30-7466-4f3b-a875-5e559cc0abce name=/runtime.v1.RuntimeService/StartContainer sandboxID=12ccf5ab2bc96a3debcae50142b27d025ae2c94f9240f124cd8ccc66b320d5c9
	Nov 01 09:37:51 functional-593346 crio[3594]: time="2025-11-01T09:37:51.547778841Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=400dd1a9-05ca-4e41-a8bc-dfc38802f844 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:38:00 functional-593346 crio[3594]: time="2025-11-01T09:38:00.548189044Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=699cc4b4-a44c-4cd8-b7cc-18362369b91d name=/runtime.v1.ImageService/PullImage
	Nov 01 09:38:38 functional-593346 crio[3594]: time="2025-11-01T09:38:38.548472259Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=db1c4441-ad71-4513-9fbb-3d62efde3c66 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:38:41 functional-593346 crio[3594]: time="2025-11-01T09:38:41.548596274Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7e4cfa38-46cc-4597-987b-dde97fe3d585 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:40:00 functional-593346 crio[3594]: time="2025-11-01T09:40:00.547818105Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3c26a9e1-ca73-48df-8e74-1c55484e6fb4 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:40:12 functional-593346 crio[3594]: time="2025-11-01T09:40:12.548397877Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4b6be78e-2236-4fbc-9406-fa7c0406fa17 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:42:48 functional-593346 crio[3594]: time="2025-11-01T09:42:48.547804323Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=18bcb91e-d9d9-4272-ab98-11eaa2ee2ea6 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:42:53 functional-593346 crio[3594]: time="2025-11-01T09:42:53.548225636Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=317bfc48-afec-415a-91b0-2af0c181446c name=/runtime.v1.ImageService/PullImage
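	CRI-O keeps re-issuing "Pulling image: kicbase/echo-server:latest" for several minutes without ever logging a completed pull, so the echo-server image apparently never becomes available to the runtime. The image state can be inspected directly on the node with crictl (a sketch; the grep filter is illustrative):
	
	  out/minikube-linux-amd64 -p functional-593346 ssh -- sudo crictl images | grep echo-server
	  out/minikube-linux-amd64 -p functional-593346 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest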
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d42433c7cb548       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   12ccf5ab2bc96       kubernetes-dashboard-855c9754f9-krh4k        kubernetes-dashboard
	7f916738e3a79       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   114b8aec19fea       dashboard-metrics-scraper-77bf4d6c4c-7frnm   kubernetes-dashboard
	71a97dc24c2d5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   34c06f9cd820e       busybox-mount                                default
	c7a26bd79eefa       docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58                  9 minutes ago       Running             myfrontend                  0                   be1ea1434ec80       sp-pod                                       default
	3f6aa8e3af288       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   d085798d0c713       mysql-5bb876957f-d4mbk                       default
	5adfde8f34c32       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   db3d5354a9200       nginx-svc                                    default
	be5bd4b7f1269       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   9de8899048cbe       kube-apiserver-functional-593346             kube-system
	2d39d3e86032b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   2dc1c1d58844b       kube-scheduler-functional-593346             kube-system
	4be7d32dfe7ff       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   ca0215fd2c828       kube-controller-manager-functional-593346    kube-system
	d75f5d09bebd7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   54d7dd47e0c81       etcd-functional-593346                       kube-system
	59d54c17d8489       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   ca0215fd2c828       kube-controller-manager-functional-593346    kube-system
	f2c18382566a2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   bddcd6b86ba32       coredns-66bc5c9577-mbpgf                     kube-system
	858636ef725b4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   72e8648daf424       storage-provisioner                          kube-system
	82dd597630e67       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   a95ff420b4377       kube-proxy-2hqgm                             kube-system
	ef185696a101e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   3233d193476a1       kindnet-hmk7n                                kube-system
	dce0c4de00726       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   bddcd6b86ba32       coredns-66bc5c9577-mbpgf                     kube-system
	979d803363315       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   72e8648daf424       storage-provisioner                          kube-system
	66d1d3f5b44b6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   3233d193476a1       kindnet-hmk7n                                kube-system
	6f62c49a90e09       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   a95ff420b4377       kube-proxy-2hqgm                             kube-system
	24719fe1c42a7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   54d7dd47e0c81       etcd-functional-593346                       kube-system
	10297a5f93bef       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   2dc1c1d58844b       kube-scheduler-functional-593346             kube-system
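	The listing above is the runtime's own container view; a roughly equivalent snapshot can be taken on the node with crictl (a sketch, assuming crictl is configured against the CRI-O socket as it is in the kic base image):
	
	  out/minikube-linux-amd64 -p functional-593346 ssh -- sudo crictl ps -a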
	
	
	==> coredns [dce0c4de00726613b0dc978e7e2419e14002090a3aa81f1c394d92b46e0a44d4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35017 - 56956 "HINFO IN 6921106987466014772.4365780125473054558. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032480055s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f2c18382566a2b303974d185992f8cd5413da056d1c4bb597f35f178a08127f4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37251 - 1178 "HINFO IN 1868172919654378450.730812140221855817. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.031382349s
	
	
	==> describe nodes <==
	Name:               functional-593346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-593346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=functional-593346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_35_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:35:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-593346
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:47:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:45:25 +0000   Sat, 01 Nov 2025 09:35:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:45:25 +0000   Sat, 01 Nov 2025 09:35:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:45:25 +0000   Sat, 01 Nov 2025 09:35:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:45:25 +0000   Sat, 01 Nov 2025 09:35:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-593346
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                547353e7-fbbe-4f31-97fa-e9c3a280a36a
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-llwfk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  default                     hello-node-connect-7d85dfc575-qqnvc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-d4mbk                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 coredns-66bc5c9577-mbpgf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-593346                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-hmk7n                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-593346              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-593346     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2hqgm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-593346              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-7frnm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-krh4k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-593346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-593346 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-593346 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-593346 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-593346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-593346 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                node-controller  Node functional-593346 event: Registered Node functional-593346 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-593346 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node functional-593346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-593346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-593346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-593346 event: Registered Node functional-593346 in Controller
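	The node dump above is the standard "kubectl describe node" view; with the kubeconfig written by this run it can be reproduced as follows (the context name is assumed to match the profile name):
	
	  kubectl --context functional-593346 describe node functional-593346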
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [24719fe1c42a7e4a29ad150ac3d4398c4abf5a8a20151eb2e0b4018871f732c1] <==
	{"level":"warn","ts":"2025-11-01T09:35:38.230904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:35:38.239266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:35:38.245620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:35:38.258485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:35:38.264890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:35:38.272058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:35:38.324501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:36:23.185256Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:36:23.185376Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-593346","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T09:36:23.185485Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:36:30.187177Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:36:30.187272Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:36:30.187349Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:36:30.187390Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:36:30.187401Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:36:30.187348Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-01T09:36:30.187437Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:36:30.187443Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-01T09:36:30.187450Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:36:30.187457Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:36:30.187466Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:36:30.189579Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T09:36:30.189643Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:36:30.189670Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T09:36:30.189679Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-593346","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d75f5d09bebd77defb6cd3b506bfb9623f0f966cd855c664577047d0fb0c2085] <==
	{"level":"warn","ts":"2025-11-01T09:36:44.054179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.060467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.066800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.073271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.079709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.086264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.092719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.099145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.105640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.112226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.118531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.124930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.131528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.138673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.144818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.151467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.157976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.164322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.179828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.186410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.194808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:44.235951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52622","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:46:43.759163Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1157}
	{"level":"info","ts":"2025-11-01T09:46:43.778107Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1157,"took":"18.503267ms","hash":159686303,"current-db-size-bytes":3477504,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-01T09:46:43.778183Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":159686303,"revision":1157,"compact-revision":-1}
	
	
	==> kernel <==
	 09:47:11 up  2:29,  0 user,  load average: 0.35, 0.28, 5.91
	Linux functional-593346 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [66d1d3f5b44b6b393542cdfd98cc9f3922fb1a6cab2dc6743134292151c88204] <==
	I1101 09:35:47.104249       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:35:47.104535       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1101 09:35:47.104708       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:35:47.104731       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:35:47.104746       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:35:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:35:47.403396       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:35:47.403455       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:35:47.403472       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:35:47.403865       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:35:47.792573       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:35:47.792654       1 metrics.go:72] Registering metrics
	I1101 09:35:47.793035       1 controller.go:711] "Syncing nftables rules"
	I1101 09:35:57.395867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:35:57.395944       1 main.go:301] handling current node
	I1101 09:36:07.402177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:36:07.402211       1 main.go:301] handling current node
	I1101 09:36:17.395018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:36:17.395071       1 main.go:301] handling current node
	
	
	==> kindnet [ef185696a101e7483ed45be22db8d9fc132c10051ab73d68df6b2021914d2475] <==
	I1101 09:45:03.618443       1 main.go:301] handling current node
	I1101 09:45:13.616258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:45:13.616294       1 main.go:301] handling current node
	I1101 09:45:23.617136       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:45:23.617182       1 main.go:301] handling current node
	I1101 09:45:33.619702       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:45:33.619743       1 main.go:301] handling current node
	I1101 09:45:43.619559       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:45:43.619612       1 main.go:301] handling current node
	I1101 09:45:53.624675       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:45:53.624713       1 main.go:301] handling current node
	I1101 09:46:03.615832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:46:03.615899       1 main.go:301] handling current node
	I1101 09:46:13.617033       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:46:13.617111       1 main.go:301] handling current node
	I1101 09:46:23.615336       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:46:23.615382       1 main.go:301] handling current node
	I1101 09:46:33.616274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:46:33.616313       1 main.go:301] handling current node
	I1101 09:46:43.624358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:46:43.624398       1 main.go:301] handling current node
	I1101 09:46:53.624522       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:46:53.624561       1 main.go:301] handling current node
	I1101 09:47:03.616269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:47:03.616322       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be5bd4b7f1269da589294239c95fa494ed28c0da9c8aff3226f4f8f63f48c889] <==
	I1101 09:36:44.756009       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:36:45.607317       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:36:45.662037       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1101 09:36:45.823929       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 09:36:45.825169       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:36:45.829384       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:36:46.411602       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:36:46.512263       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:36:46.573204       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:36:46.580234       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:36:52.276720       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:37:05.990322       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.102.36"}
	I1101 09:37:09.882660       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.123.242"}
	I1101 09:37:10.829270       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.138.125"}
	I1101 09:37:11.827128       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.244.111"}
	I1101 09:37:21.702698       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.66.121"}
	E1101 09:37:27.953406       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53696: use of closed network connection
	E1101 09:37:28.668000       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53704: use of closed network connection
	E1101 09:37:30.758354       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53718: use of closed network connection
	E1101 09:37:31.952197       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53758: use of closed network connection
	E1101 09:37:39.983345       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56962: use of closed network connection
	I1101 09:37:45.226462       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:37:45.328080       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.83.28"}
	I1101 09:37:45.340207       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.66.122"}
	I1101 09:46:44.651778       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4be7d32dfe7ff47b19c64d3fbb4d4037b2a3c820f9329f1fcb4650423292c16c] <==
	I1101 09:36:48.035413       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:36:48.035566       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:36:48.035651       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-593346"
	I1101 09:36:48.035708       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:36:48.040458       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:36:48.040479       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:36:48.040565       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:36:48.040573       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:36:48.040610       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:36:48.041655       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:36:48.041683       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:36:48.041719       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:36:48.041733       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:36:48.042437       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:36:48.042527       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:36:48.045016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:36:48.051193       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:36:48.053503       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:36:48.061910       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:37:45.271696       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:37:45.276143       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:37:45.280150       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:37:45.280508       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:37:45.284578       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:37:45.290417       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [59d54c17d8489fa53c5a2c6bfc91645df5780b1c2505ff10b606fadbacad4344] <==
	I1101 09:36:32.468295       1 shared_informer.go:349] "Waiting for caches to sync" controller="TTL"
	I1101 09:36:32.518054       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1101 09:36:32.518081       1 controllermanager.go:744] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1101 09:36:32.518143       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1101 09:36:32.518150       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	I1101 09:36:32.570557       1 controllermanager.go:781] "Started controller" controller="endpoints-controller"
	I1101 09:36:32.570625       1 endpoints_controller.go:188] "Starting endpoint controller" logger="endpoints-controller"
	I1101 09:36:32.570637       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint"
	I1101 09:36:32.768281       1 controllermanager.go:781] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1101 09:36:32.768309       1 horizontal.go:205] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1101 09:36:32.768326       1 shared_informer.go:349] "Waiting for caches to sync" controller="HPA"
	I1101 09:36:32.868600       1 controllermanager.go:781] "Started controller" controller="disruption-controller"
	I1101 09:36:32.868659       1 disruption.go:457] "Sending events to api server." logger="disruption-controller"
	I1101 09:36:32.868698       1 disruption.go:468] "Starting disruption controller" logger="disruption-controller"
	I1101 09:36:32.868707       1 shared_informer.go:349] "Waiting for caches to sync" controller="disruption"
	I1101 09:36:32.915512       1 shared_informer.go:356] "Caches are synced" controller="tokens"
	I1101 09:36:32.918434       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1101 09:36:32.918493       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1101 09:36:32.918503       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrapproving"
	I1101 09:36:32.970432       1 controllermanager.go:781] "Started controller" controller="token-cleaner-controller"
	I1101 09:36:32.970462       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I1101 09:36:32.970525       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1101 09:36:32.970534       1 shared_informer.go:349] "Waiting for caches to sync" controller="token_cleaner"
	I1101 09:36:32.970543       1 shared_informer.go:356] "Caches are synced" controller="token_cleaner"
	F1101 09:36:33.015828       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/ephemeral-volume-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [6f62c49a90e09dbb8a125644c89a0bd33ab1a5d9a35131b92dd6ec5a8d5398b5] <==
	I1101 09:35:46.937584       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:35:47.011822       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:35:47.112338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:35:47.112380       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:35:47.112476       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:35:47.131432       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:35:47.131487       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:35:47.137490       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:35:47.137957       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:35:47.137995       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:35:47.141640       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:35:47.141725       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:35:47.141767       1 config.go:309] "Starting node config controller"
	I1101 09:35:47.141833       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:35:47.141861       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:35:47.141776       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:35:47.141873       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:35:47.141722       1 config.go:200] "Starting service config controller"
	I1101 09:35:47.141890       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:35:47.242614       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:35:47.242634       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:35:47.242626       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [82dd597630e676f0fd37f871f2132f7775dcca82dec499c16ec3f6e80a6b5af7] <==
	I1101 09:36:23.242528       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:36:23.309952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:36:23.411015       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:36:23.411062       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:36:23.411191       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:36:23.431127       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:36:23.431201       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:36:23.437108       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:36:23.437502       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:36:23.437566       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:36:23.438951       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:36:23.438988       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:36:23.438975       1 config.go:200] "Starting service config controller"
	I1101 09:36:23.439006       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:36:23.439029       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:36:23.439066       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:36:23.439115       1 config.go:309] "Starting node config controller"
	I1101 09:36:23.439149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:36:23.439158       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:36:23.539234       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:36:23.539281       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:36:23.539257       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1101 09:36:44.654404       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:36:44.654478       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1101 09:36:44.654506       1 reflector.go:205] "Failed to watch" err="nodes \"functional-593346\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [10297a5f93bef0d0660b3135763ea1baa49edcbcd1ec01b96c0392d5a9224e1e] <==
	E1101 09:35:38.731849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:35:38.731876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:35:38.731895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:35:38.731962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:35:38.732000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:35:38.732135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:35:39.544587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:35:39.553039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:35:39.582694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:35:39.589830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:35:39.628706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:35:39.655324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:35:39.743195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:35:39.795494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:35:39.811807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:35:39.943522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:35:39.971742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:35:39.975728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1101 09:35:41.328827       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:36:40.807597       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:36:40.807636       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:36:40.807644       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1101 09:36:40.807641       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:36:40.807665       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1101 09:36:40.807671       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2d39d3e86032b6bf0e86959956bf8ab2b382725ecce20830621626f27fd3f92b] <==
	I1101 09:36:43.698026       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:36:44.646251       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:36:44.646309       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:36:44.646323       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:36:44.646332       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:36:44.666220       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:36:44.666253       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:36:44.668293       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:36:44.668338       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:36:44.668571       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:36:44.668641       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:36:44.769289       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:44:34 functional-593346 kubelet[4302]: E1101 09:44:34.548222    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:44:42 functional-593346 kubelet[4302]: E1101 09:44:42.548595    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:44:47 functional-593346 kubelet[4302]: E1101 09:44:47.548160    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:44:57 functional-593346 kubelet[4302]: E1101 09:44:57.548098    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:45:02 functional-593346 kubelet[4302]: E1101 09:45:02.548705    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:45:11 functional-593346 kubelet[4302]: E1101 09:45:11.547283    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:45:14 functional-593346 kubelet[4302]: E1101 09:45:14.549799    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:45:22 functional-593346 kubelet[4302]: E1101 09:45:22.548080    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:45:25 functional-593346 kubelet[4302]: E1101 09:45:25.547313    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:45:34 functional-593346 kubelet[4302]: E1101 09:45:34.547776    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:45:37 functional-593346 kubelet[4302]: E1101 09:45:37.547804    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:45:47 functional-593346 kubelet[4302]: E1101 09:45:47.547774    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:45:48 functional-593346 kubelet[4302]: E1101 09:45:48.548205    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:45:59 functional-593346 kubelet[4302]: E1101 09:45:59.547585    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:46:01 functional-593346 kubelet[4302]: E1101 09:46:01.547418    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:46:11 functional-593346 kubelet[4302]: E1101 09:46:11.547660    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:46:16 functional-593346 kubelet[4302]: E1101 09:46:16.550268    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:46:25 functional-593346 kubelet[4302]: E1101 09:46:25.547298    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:46:31 functional-593346 kubelet[4302]: E1101 09:46:31.548160    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:46:36 functional-593346 kubelet[4302]: E1101 09:46:36.547536    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:46:43 functional-593346 kubelet[4302]: E1101 09:46:43.547173    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:46:49 functional-593346 kubelet[4302]: E1101 09:46:49.548223    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:46:54 functional-593346 kubelet[4302]: E1101 09:46:54.547971    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	Nov 01 09:47:03 functional-593346 kubelet[4302]: E1101 09:47:03.547445    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qqnvc" podUID="7c53dd94-dc07-41b5-9a2e-24866f379988"
	Nov 01 09:47:07 functional-593346 kubelet[4302]: E1101 09:47:07.548219    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-llwfk" podUID="885c8cde-84ca-4dc8-a694-6fa947ca87c6"
	
	
	==> kubernetes-dashboard [d42433c7cb548d2b7aff181b44e3c211ccbc36100d5409da3580c55960df9af7] <==
	2025/11/01 09:37:51 Starting overwatch
	2025/11/01 09:37:51 Using namespace: kubernetes-dashboard
	2025/11/01 09:37:51 Using in-cluster config to connect to apiserver
	2025/11/01 09:37:51 Using secret token for csrf signing
	2025/11/01 09:37:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:37:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:37:51 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:37:51 Generating JWE encryption key
	2025/11/01 09:37:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:37:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:37:51 Initializing JWE encryption key from synchronized object
	2025/11/01 09:37:51 Creating in-cluster Sidecar client
	2025/11/01 09:37:51 Serving insecurely on HTTP port: 9090
	2025/11/01 09:37:51 Successful request to sidecar
	
	
	==> storage-provisioner [858636ef725b4e01c1831c118b53a36b0534c8aede1ab606912dd02ef3e1de0e] <==
	W1101 09:46:47.947711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:49.951245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:49.955348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:51.958252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:51.963324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:53.966651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:53.970722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:55.974252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:55.978115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:57.981410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:57.985378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:59.988648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:46:59.993127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:01.996579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:02.001821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:04.005133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:04.010302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:06.013889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:06.018681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:08.022288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:08.026433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:10.030441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:10.034924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:12.038420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:47:12.044170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [979d803363315acbfe3dea486cbbe82c228eedd4db88020ca4003454344032dd] <==
	I1101 09:35:57.985170       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-593346_913112a1-a524-4a4c-ad62-e821196db361!
	W1101 09:35:59.893798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:35:59.898288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:01.901919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:01.906119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:03.909687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:03.913861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:05.916925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:05.921652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:07.925160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:07.929053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:09.932809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:09.937758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:11.941953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:11.947419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:13.950446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:13.954241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:15.957708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:15.961374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:17.965254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:17.970707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:19.974199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:19.979342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:21.982525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:36:21.986423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-593346 -n functional-593346
helpers_test.go:269: (dbg) Run:  kubectl --context functional-593346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-llwfk hello-node-connect-7d85dfc575-qqnvc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-593346 describe pod busybox-mount hello-node-75c85bcc94-llwfk hello-node-connect-7d85dfc575-qqnvc
helpers_test.go:290: (dbg) kubectl --context functional-593346 describe pod busybox-mount hello-node-75c85bcc94-llwfk hello-node-connect-7d85dfc575-qqnvc:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-593346/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:37:33 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://71a97dc24c2d5a90a51fa98aa0c5ec02433a4b089de8c8500e05dabc29145109
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:37:36 +0000
	      Finished:     Sat, 01 Nov 2025 09:37:36 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k8vs4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-k8vs4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m39s  default-scheduler  Successfully assigned default/busybox-mount to functional-593346
	  Normal  Pulling    9m38s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m36s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.269s (2.638s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m36s  kubelet            Created container: mount-munger
	  Normal  Started    9m36s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-llwfk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-593346/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:37:21 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v9l4b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v9l4b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m51s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-llwfk to functional-593346
	  Normal   Pulling    7m (x5 over 9m51s)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m (x5 over 9m47s)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m (x5 over 9m47s)      kubelet            Error: ErrImagePull
	  Normal   BackOff    4m43s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m43s (x21 over 9m47s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-qqnvc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-593346/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:37:09 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ncbww (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ncbww:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qqnvc to functional-593346
	  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.15s)
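
Note on the failure above (and on TestFunctional/parallel/ServiceCmd/DeployApp below): every echo-server pod is stuck in ImagePullBackOff because CRI-O resolves the unqualified image name "kicbase/echo-server" with short-name mode set to enforcing, and the lookup returns an ambiguous list. A minimal workaround sketch, assuming the docker.io copy of the image is the intended one (the fully qualified reference below is illustrative, not taken from the test source):

	# fully qualify the image so CRI-O performs no short-name resolution at all
	kubectl --context functional-593346 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest

Relaxing short-name-mode in the node's /etc/containers/registries.conf would also avoid the ambiguity, but that changes runtime policy rather than the test input.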

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-593346 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-593346 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-llwfk" [885c8cde-84ca-4dc8-a694-6fa947ca87c6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-593346 -n functional-593346
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 09:47:22.049360971 +0000 UTC m=+1149.755261516
functional_test.go:1460: (dbg) Run:  kubectl --context functional-593346 describe po hello-node-75c85bcc94-llwfk -n default
functional_test.go:1460: (dbg) kubectl --context functional-593346 describe po hello-node-75c85bcc94-llwfk -n default:
Name:             hello-node-75c85bcc94-llwfk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-593346/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:37:21 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v9l4b (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-v9l4b:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-llwfk to functional-593346
Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 9m57s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-593346 logs hello-node-75c85bcc94-llwfk -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-593346 logs hello-node-75c85bcc94-llwfk -n default: exit status 1 (64.852754ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-llwfk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-593346 logs hello-node-75c85bcc94-llwfk -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image load --daemon kicbase/echo-server:functional-593346 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-593346" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)
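
The image-load failures in this and the next two ImageCommands tests share one symptom: `image load --daemon` exits cleanly but the tag never appears in `image ls`. A quick diagnostic sketch (not part of the test) to check whether the image reached the node's CRI-O image store at all:

	# list images as seen by CRI-O inside the minikube node
	out/minikube-linux-amd64 -p functional-593346 ssh -- sudo crictl images | grep echo-server
	# compare with minikube's own view of the runtime
	out/minikube-linux-amd64 -p functional-593346 image ls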

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image load --daemon kicbase/echo-server:functional-593346 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-593346" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-593346
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image load --daemon kicbase/echo-server:functional-593346 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-593346" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image save kicbase/echo-server:functional-593346 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1101 09:37:47.215778  556947 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:37:47.216121  556947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:37:47.216135  556947 out.go:374] Setting ErrFile to fd 2...
	I1101 09:37:47.216140  556947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:37:47.216439  556947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:37:47.217311  556947 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:37:47.217473  556947 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:37:47.218092  556947 cli_runner.go:164] Run: docker container inspect functional-593346 --format={{.State.Status}}
	I1101 09:37:47.240642  556947 ssh_runner.go:195] Run: systemctl --version
	I1101 09:37:47.240713  556947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-593346
	I1101 09:37:47.261282  556947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/functional-593346/id_rsa Username:docker}
	I1101 09:37:47.367961  556947 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1101 09:37:47.368039  556947 cache_images.go:255] Failed to load cached images for "functional-593346": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1101 09:37:47.368078  556947 cache_images.go:267] failed pushing to: functional-593346

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-593346
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image save --daemon kicbase/echo-server:functional-593346 --alsologtostderr
E1101 09:37:47.442011  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-593346
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-593346: exit status 1 (17.997098ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-593346

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-593346

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
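The image-command failures above all show the same symptom: the tagged image never makes it into, or back out of, the cluster's image store. The round-trip these tests exercise can be reproduced by hand; a minimal sketch, assuming the functional-593346 profile and a scratch path under /tmp instead of the Jenkins workspace path used in the log, is:

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-593346
    out/minikube-linux-amd64 -p functional-593346 image load --daemon kicbase/echo-server:functional-593346
    out/minikube-linux-amd64 -p functional-593346 image ls                   # the tag should appear here once loaded
    out/minikube-linux-amd64 -p functional-593346 image save kicbase/echo-server:functional-593346 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-593346 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-593346 image save --daemon kicbase/echo-server:functional-593346
    docker image inspect localhost/kicbase/echo-server:functional-593346     # present only if save --daemon succeeded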

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 service --namespace=default --https --url hello-node: exit status 115 (554.590775ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32497
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-593346 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 service hello-node --url --format={{.IP}}: exit status 115 (558.619235ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-593346 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 service hello-node --url: exit status 115 (559.319676ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32497
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-593346 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32497
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.56s)
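The HTTPS, Format and URL subtests all exit with SVC_UNREACHABLE for the same underlying reason: no running pod backs the hello-node service, because the DeployApp failure above left the deployment stuck in ImagePullBackOff. A quick way to confirm there is nothing behind the service before asking minikube for a URL, assuming the same context, might be:

    kubectl --context functional-593346 get endpoints hello-node -n default   # empty when no pod is Ready
    kubectl --context functional-593346 get pods -n default -l app=hello-node
    out/minikube-linux-amd64 -p functional-593346 service hello-node --url    # only meaningful once a pod is Running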

                                                
                                    
TestJSONOutput/pause/Command (2.1s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-748163 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-748163 --output=json --user=testUser: exit status 80 (2.102828402s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9b4d3c04-2b3a-4a1c-9961-1e5752c2594b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-748163 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a45149a3-21af-49ba-a58d-84d09a4a4b7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T09:57:26Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"a511b041-50bf-4dbc-8263-347b748ef96b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-748163 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.10s)

                                                
                                    
TestJSONOutput/unpause/Command (1.5s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-748163 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-748163 --output=json --user=testUser: exit status 80 (1.4995928s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"410b3e5b-82fd-46fd-a879-e2fd3733c73e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-748163 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d8bcfcf9-17c3-4a64-8e07-883437f42bc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T09:57:28Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"f137a6ff-0e96-4743-ac5a-32ebc6c2a08b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-748163 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.50s)
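Both pause and unpause emit one CloudEvents-style JSON object per line on stdout, so the failing event can be pulled out mechanically. A minimal sketch, assuming jq is available on the host (it is not part of the minikube tooling shown in this report):

    out/minikube-linux-amd64 pause -p json-output-748163 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints the GUEST_PAUSE message above, ending in: open /run/runc: no such file or directory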

                                                
                                    
TestPreload (437.68s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-619273 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1101 10:06:25.495046  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-619273 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.518967036s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-619273 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-619273 image pull gcr.io/k8s-minikube/busybox: (2.34705818s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-619273
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-619273: (5.905484103s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-619273 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1101 10:07:09.891033  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:08:32.962372  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:09:28.569014  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:11:25.496366  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:09.890477  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-619273 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (6m18.103736044s)

                                                
                                                
-- stdout --
	* [test-preload-619273] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	* Using the docker driver based on existing profile
	* Starting "test-preload-619273" primary control-plane node in "test-preload-619273" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Downloading Kubernetes v1.32.0 preload ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:06:53.782649  678208 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:06:53.782964  678208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:06:53.782976  678208 out.go:374] Setting ErrFile to fd 2...
	I1101 10:06:53.782980  678208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:06:53.783242  678208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:06:53.783772  678208 out.go:368] Setting JSON to false
	I1101 10:06:53.784794  678208 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10151,"bootTime":1761981463,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:06:53.784916  678208 start.go:143] virtualization: kvm guest
	I1101 10:06:53.786922  678208 out.go:179] * [test-preload-619273] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:06:53.788061  678208 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:06:53.788100  678208 notify.go:221] Checking for updates...
	I1101 10:06:53.790195  678208 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:06:53.791279  678208 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:06:53.792534  678208 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:06:53.793627  678208 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:06:53.794666  678208 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:06:53.796176  678208 config.go:182] Loaded profile config "test-preload-619273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:06:53.797637  678208 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 10:06:53.798556  678208 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:06:53.823614  678208 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:06:53.823717  678208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:06:53.883758  678208 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-01 10:06:53.87275899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:06:53.883950  678208 docker.go:319] overlay module found
	I1101 10:06:53.885814  678208 out.go:179] * Using the docker driver based on existing profile
	I1101 10:06:53.886948  678208 start.go:309] selected driver: docker
	I1101 10:06:53.886966  678208 start.go:930] validating driver "docker" against &{Name:test-preload-619273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:06:53.887110  678208 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:06:53.887812  678208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:06:53.952044  678208 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-01 10:06:53.940726601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:06:53.952322  678208 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:06:53.952355  678208 cni.go:84] Creating CNI manager for ""
	I1101 10:06:53.952412  678208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:06:53.952449  678208 start.go:353] cluster config:
	{Name:test-preload-619273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:06:53.954366  678208 out.go:179] * Starting "test-preload-619273" primary control-plane node in "test-preload-619273" cluster
	I1101 10:06:53.955546  678208 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:06:53.956613  678208 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:06:53.957569  678208 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:06:53.957605  678208 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:06:53.978641  678208 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:06:53.978672  678208 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:06:54.063966  678208 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:06:54.063997  678208 cache.go:59] Caching tarball of preloaded images
	I1101 10:06:54.064178  678208 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:06:54.065908  678208 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1101 10:06:54.067023  678208 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 10:06:54.199526  678208 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1101 10:06:54.199579  678208 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:07:05.052963  678208 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1101 10:07:05.053164  678208 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/config.json ...
	I1101 10:07:05.053423  678208 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:07:05.053493  678208 start.go:360] acquireMachinesLock for test-preload-619273: {Name:mk4e57fdf3e52c3d778738348eedaa73a0e90e07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:07:05.053599  678208 start.go:364] duration metric: took 72.656µs to acquireMachinesLock for "test-preload-619273"
	I1101 10:07:05.053622  678208 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:07:05.053631  678208 fix.go:54] fixHost starting: 
	I1101 10:07:05.053944  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:05.071074  678208 fix.go:112] recreateIfNeeded on test-preload-619273: state=Stopped err=<nil>
	W1101 10:07:05.071126  678208 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:07:05.072825  678208 out.go:252] * Restarting existing docker container for "test-preload-619273" ...
	I1101 10:07:05.072922  678208 cli_runner.go:164] Run: docker start test-preload-619273
	I1101 10:07:05.307109  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:05.325508  678208 kic.go:430] container "test-preload-619273" state is running.
	I1101 10:07:05.325907  678208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-619273
	I1101 10:07:05.344165  678208 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/config.json ...
	I1101 10:07:05.344467  678208 machine.go:94] provisionDockerMachine start ...
	I1101 10:07:05.344535  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:05.363000  678208 main.go:143] libmachine: Using SSH client type: native
	I1101 10:07:05.363243  678208 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 10:07:05.363256  678208 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:07:05.363900  678208 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52494->127.0.0.1:33078: read: connection reset by peer
	I1101 10:07:08.509009  678208 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-619273
	
	I1101 10:07:08.509034  678208 ubuntu.go:182] provisioning hostname "test-preload-619273"
	I1101 10:07:08.509090  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:08.526610  678208 main.go:143] libmachine: Using SSH client type: native
	I1101 10:07:08.526828  678208 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 10:07:08.526859  678208 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-619273 && echo "test-preload-619273" | sudo tee /etc/hostname
	I1101 10:07:08.676774  678208 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-619273
	
	I1101 10:07:08.676911  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:08.694592  678208 main.go:143] libmachine: Using SSH client type: native
	I1101 10:07:08.694937  678208 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 10:07:08.694967  678208 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-619273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-619273/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-619273' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:07:08.836705  678208 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:07:08.836735  678208 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:07:08.836774  678208 ubuntu.go:190] setting up certificates
	I1101 10:07:08.836785  678208 provision.go:84] configureAuth start
	I1101 10:07:08.836861  678208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-619273
	I1101 10:07:08.854423  678208 provision.go:143] copyHostCerts
	I1101 10:07:08.854495  678208 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:07:08.854509  678208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:07:08.854596  678208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:07:08.854736  678208 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:07:08.854748  678208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:07:08.854787  678208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:07:08.854904  678208 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:07:08.854916  678208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:07:08.854952  678208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:07:08.855033  678208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.test-preload-619273 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-619273]
	I1101 10:07:08.905866  678208 provision.go:177] copyRemoteCerts
	I1101 10:07:08.905930  678208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:07:08.905968  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:08.923184  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.024638  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:07:09.043259  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 10:07:09.061897  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:07:09.080045  678208 provision.go:87] duration metric: took 243.244432ms to configureAuth
	I1101 10:07:09.080074  678208 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:07:09.080247  678208 config.go:182] Loaded profile config "test-preload-619273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:07:09.080353  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.098308  678208 main.go:143] libmachine: Using SSH client type: native
	I1101 10:07:09.098546  678208 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 10:07:09.098562  678208 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:07:09.382922  678208 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:07:09.382956  678208 machine.go:97] duration metric: took 4.038472851s to provisionDockerMachine
	I1101 10:07:09.382974  678208 start.go:293] postStartSetup for "test-preload-619273" (driver="docker")
	I1101 10:07:09.382989  678208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:07:09.383070  678208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:07:09.383132  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.401772  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.503038  678208 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:07:09.506658  678208 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:07:09.506685  678208 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:07:09.506696  678208 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:07:09.506755  678208 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:07:09.506896  678208 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:07:09.507021  678208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:07:09.514712  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:07:09.532604  678208 start.go:296] duration metric: took 149.610611ms for postStartSetup
	I1101 10:07:09.532690  678208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:07:09.532744  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.550924  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.648141  678208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:07:09.652980  678208 fix.go:56] duration metric: took 4.599329909s for fixHost
	I1101 10:07:09.653015  678208 start.go:83] releasing machines lock for "test-preload-619273", held for 4.599400282s
	I1101 10:07:09.653104  678208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-619273
	I1101 10:07:09.670401  678208 ssh_runner.go:195] Run: cat /version.json
	I1101 10:07:09.670457  678208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:07:09.670476  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.670532  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.688493  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.689238  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.786601  678208 ssh_runner.go:195] Run: systemctl --version
	I1101 10:07:09.839752  678208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:07:09.876928  678208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:07:09.881829  678208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:07:09.881918  678208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:07:09.890845  678208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:07:09.890870  678208 start.go:496] detecting cgroup driver to use...
	I1101 10:07:09.890913  678208 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:07:09.890961  678208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:07:09.906297  678208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:07:09.919764  678208 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:07:09.919826  678208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:07:09.935198  678208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:07:09.948206  678208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:07:10.027985  678208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:07:10.111487  678208 docker.go:234] disabling docker service ...
	I1101 10:07:10.111566  678208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:07:10.126542  678208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:07:10.139334  678208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:07:10.219026  678208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:07:10.295486  678208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:07:10.308138  678208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:07:10.322402  678208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1101 10:07:10.322457  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.331943  678208 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:07:10.332019  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.341717  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.350913  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.360222  678208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:07:10.368811  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.378222  678208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.387042  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.396131  678208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:07:10.403624  678208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:07:10.411276  678208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:07:10.491944  678208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:07:10.602665  678208 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:07:10.602748  678208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:07:10.606921  678208 start.go:564] Will wait 60s for crictl version
	I1101 10:07:10.606993  678208 ssh_runner.go:195] Run: which crictl
	I1101 10:07:10.610756  678208 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:07:10.636374  678208 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:07:10.636444  678208 ssh_runner.go:195] Run: crio --version
	I1101 10:07:10.665010  678208 ssh_runner.go:195] Run: crio --version
	I1101 10:07:10.695085  678208 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1101 10:07:10.696023  678208 cli_runner.go:164] Run: docker network inspect test-preload-619273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:07:10.713154  678208 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:07:10.717497  678208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:07:10.727814  678208 kubeadm.go:884] updating cluster {Name:test-preload-619273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:07:10.727939  678208 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:07:10.727991  678208 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:07:10.759366  678208 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:07:10.759388  678208 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:07:10.759448  678208 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:07:10.785677  678208 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:07:10.785698  678208 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:07:10.785706  678208 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1101 10:07:10.785812  678208 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-619273 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:07:10.785910  678208 ssh_runner.go:195] Run: crio config
	I1101 10:07:10.832881  678208 cni.go:84] Creating CNI manager for ""
	I1101 10:07:10.832904  678208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:07:10.832924  678208 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:07:10.832949  678208 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-619273 NodeName:test-preload-619273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:07:10.833094  678208 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-619273"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:07:10.833173  678208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1101 10:07:10.841561  678208 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:07:10.841638  678208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:07:10.849466  678208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 10:07:10.861969  678208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:07:10.874587  678208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
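The kubeadm.yaml rendered above (2215 bytes, copied here to /var/tmp/minikube/kubeadm.yaml.new) is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal Go sketch that walks such a stream and prints each document's apiVersion and kind, using gopkg.in/yaml.v3; the local file name is an assumption for illustration only:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Open a locally saved copy of the generated config (illustrative path).
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Only the identifying fields of each document are decoded.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // end of the multi-document stream
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}

Run against the config above, this prints the four kinds in order (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration).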
	I1101 10:07:10.887547  678208 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:07:10.891505  678208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:07:10.901558  678208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:07:10.984647  678208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:07:11.008473  678208 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273 for IP: 192.168.76.2
	I1101 10:07:11.008501  678208 certs.go:195] generating shared ca certs ...
	I1101 10:07:11.008523  678208 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:07:11.008703  678208 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:07:11.008743  678208 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:07:11.008752  678208 certs.go:257] generating profile certs ...
	I1101 10:07:11.008894  678208 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.key
	I1101 10:07:11.008999  678208 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/apiserver.key.9e880539
	I1101 10:07:11.009065  678208 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/proxy-client.key
	I1101 10:07:11.009208  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:07:11.009329  678208 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:07:11.009364  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:07:11.009424  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:07:11.009457  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:07:11.009489  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:07:11.009553  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:07:11.010442  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:07:11.029854  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:07:11.049997  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:07:11.070403  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:07:11.095047  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:07:11.113522  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:07:11.131522  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:07:11.149808  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:07:11.169269  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:07:11.187407  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:07:11.207280  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:07:11.225260  678208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:07:11.238190  678208 ssh_runner.go:195] Run: openssl version
	I1101 10:07:11.244562  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:07:11.253418  678208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:07:11.257389  678208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:07:11.257450  678208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:07:11.292490  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:07:11.301097  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:07:11.310008  678208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:07:11.313961  678208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:07:11.314033  678208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:07:11.348349  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:07:11.357060  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:07:11.365819  678208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:07:11.369821  678208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:07:11.369903  678208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:07:11.404926  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:07:11.413480  678208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:07:11.417291  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:07:11.451256  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:07:11.485388  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:07:11.528497  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:07:11.570904  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:07:11.608436  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
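Each openssl x509 -noout -in <cert> -checkend 86400 run above asks whether the certificate expires within the next 24 hours, which is how the restart path decides whether the existing profile certs can be reused. A rough Go equivalent of that check with crypto/x509 (illustrative only; minikube shells out to openssl as shown):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>` as used in the log above.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}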
	I1101 10:07:11.642800  678208 kubeadm.go:401] StartCluster: {Name:test-preload-619273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:07:11.642928  678208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:07:11.643017  678208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:07:11.670887  678208 cri.go:89] found id: ""
	I1101 10:07:11.670955  678208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:07:11.679443  678208 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:07:11.679467  678208 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:07:11.679517  678208 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:07:11.687608  678208 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:07:11.688085  678208 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-619273" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:07:11.688206  678208 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-619273" cluster setting kubeconfig missing "test-preload-619273" context setting]
	I1101 10:07:11.688534  678208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:07:11.689117  678208 kapi.go:59] client config for test-preload-619273: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:07:11.689558  678208 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:07:11.689573  678208 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:07:11.689577  678208 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:07:11.689582  678208 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:07:11.689591  678208 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:07:11.690003  678208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:07:11.698090  678208 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:07:11.698130  678208 kubeadm.go:602] duration metric: took 18.656615ms to restartPrimaryControlPlane
	I1101 10:07:11.698143  678208 kubeadm.go:403] duration metric: took 55.355271ms to StartCluster
	I1101 10:07:11.698167  678208 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:07:11.698249  678208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:07:11.698832  678208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:07:11.699134  678208 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:07:11.699202  678208 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:07:11.699313  678208 addons.go:70] Setting storage-provisioner=true in profile "test-preload-619273"
	I1101 10:07:11.699331  678208 addons.go:239] Setting addon storage-provisioner=true in "test-preload-619273"
	W1101 10:07:11.699340  678208 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:07:11.699340  678208 addons.go:70] Setting default-storageclass=true in profile "test-preload-619273"
	I1101 10:07:11.699362  678208 config.go:182] Loaded profile config "test-preload-619273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:07:11.699374  678208 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-619273"
	I1101 10:07:11.699398  678208 host.go:66] Checking if "test-preload-619273" exists ...
	I1101 10:07:11.699688  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:11.699946  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:11.702218  678208 out.go:179] * Verifying Kubernetes components...
	I1101 10:07:11.703125  678208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:07:11.719075  678208 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:07:11.719690  678208 kapi.go:59] client config for test-preload-619273: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:07:11.719985  678208 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:07:11.720009  678208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:07:11.720011  678208 addons.go:239] Setting addon default-storageclass=true in "test-preload-619273"
	W1101 10:07:11.720026  678208 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:07:11.720069  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:11.720071  678208 host.go:66] Checking if "test-preload-619273" exists ...
	I1101 10:07:11.720530  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:11.745578  678208 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:07:11.745606  678208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:07:11.745662  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:11.747048  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:11.764968  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:11.800620  678208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:07:11.813769  678208 node_ready.go:35] waiting up to 6m0s for node "test-preload-619273" to be "Ready" ...
	I1101 10:07:11.856969  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:07:11.871008  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:11.914390  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:11.914429  678208 retry.go:31] will retry after 155.027362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:11.928916  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:11.928955  678208 retry.go:31] will retry after 224.636284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.070242  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:12.129032  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.129074  678208 retry.go:31] will retry after 332.35463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.154269  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:12.210812  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.210868  678208 retry.go:31] will retry after 389.142155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.461743  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:12.518473  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.518506  678208 retry.go:31] will retry after 352.10689ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.600727  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:12.656555  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.656597  678208 retry.go:31] will retry after 505.543838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.871474  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:12.929241  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.929283  678208 retry.go:31] will retry after 539.500119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:13.162674  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:13.220430  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:13.220465  678208 retry.go:31] will retry after 1.187425433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:13.469632  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:13.523809  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:13.523856  678208 retry.go:31] will retry after 676.936779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:13.814864  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
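The node_ready.go:55 warnings here and below come from the 6m0s wait declared at 10:07:11.813769: the test polls the node's Ready condition and tolerates connection-refused errors while the apiserver comes back up. A hedged client-go sketch of that kind of loop; the kubeconfig path and poll interval are placeholders, not what the test actually uses:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test uses its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-619273", metav1.GetOptions{})
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver restarts
			fmt.Println("node not reachable yet:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}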
	I1101 10:07:14.201007  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:14.256739  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:14.256779  678208 retry.go:31] will retry after 1.777959804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:14.408568  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:14.465337  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:14.465368  678208 retry.go:31] will retry after 1.27465633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:15.740882  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:15.797872  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:15.797909  678208 retry.go:31] will retry after 2.563851359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:16.035865  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:16.092438  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:16.092484  678208 retry.go:31] will retry after 3.800152361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:16.315459  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:18.362624  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:18.420704  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:18.420739  678208 retry.go:31] will retry after 3.080458291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:18.814805  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:19.893010  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:19.948466  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:19.948507  678208 retry.go:31] will retry after 2.923862855s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:21.314396  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:21.501701  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:21.557131  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:21.557162  678208 retry.go:31] will retry after 4.639844377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:22.873459  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:22.929444  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:22.929495  678208 retry.go:31] will retry after 4.274933586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:23.315421  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:25.815363  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:26.197853  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:26.253422  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:26.253456  678208 retry.go:31] will retry after 4.893385712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:27.204648  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:27.264260  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:27.264294  678208 retry.go:31] will retry after 6.27111665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:28.314810  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:30.315161  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:31.147641  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:31.203317  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:31.203354  678208 retry.go:31] will retry after 6.884778373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:32.315361  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:33.535630  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:33.590863  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:33.590900  678208 retry.go:31] will retry after 7.398744418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:34.814429  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:37.314434  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:38.088709  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:38.145795  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:38.145855  678208 retry.go:31] will retry after 11.044520788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:39.314881  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:40.990619  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:41.045055  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:41.045087  678208 retry.go:31] will retry after 18.291698485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:41.315036  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:43.814525  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:45.815370  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:48.315344  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:49.190901  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:49.247744  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:49.247786  678208 retry.go:31] will retry after 25.044173414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:50.815343  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:53.315354  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:55.815225  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:57.815347  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:59.337741  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:59.394029  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:59.394064  678208 retry.go:31] will retry after 46.12680889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:08:00.314953  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:02.315177  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:04.814364  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:07.314401  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:09.314700  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:11.814541  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:08:14.293142  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:08:14.315222  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:14.350703  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:08:14.350742  678208 retry.go:31] will retry after 25.407527422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:08:16.814501  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:18.814613  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:20.814680  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:22.814886  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:25.314567  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:27.314699  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:29.315035  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:31.315381  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:33.814680  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:35.815385  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:37.815466  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:08:39.759229  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:08:39.816604  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:08:39.816635  678208 retry.go:31] will retry after 26.467019386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:08:40.314730  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:42.814625  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:45.315332  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:08:45.521613  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:08:45.581362  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:08:45.581519  678208 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1101 10:08:47.315373  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:49.814403  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:51.814480  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:53.814631  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:56.314461  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:58.814535  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:01.314384  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:03.814602  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:09:06.283933  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:09:06.314993  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:06.341330  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:09:06.341466  678208 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 10:09:06.343274  678208 out.go:179] * Enabled addons: 
	I1101 10:09:06.344450  678208 addons.go:515] duration metric: took 1m54.645259125s for enable addons: enabled=[]
	W1101 10:09:08.814536  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:11.315352  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:13.315456  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:15.814621  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:18.314559  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:20.314783  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:22.315093  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:24.814352  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:27.315415  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:29.814573  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:31.814744  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:33.814934  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:35.815561  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:38.314595  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:40.314755  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:42.315113  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:44.814520  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:47.314397  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:49.314746  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:51.314825  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:53.315136  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:55.814531  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:57.814636  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:59.814792  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:01.815254  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:04.314644  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:06.814625  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:08.814979  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:10.815292  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:13.314460  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:15.315110  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:17.814582  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:19.815171  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:22.314760  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:24.314884  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:26.814780  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:28.814909  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:31.314925  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:33.814818  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:35.814974  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:37.815243  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:40.314468  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:42.314522  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:44.314922  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:46.315145  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:48.814930  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:51.314387  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:53.314470  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:55.314678  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:57.814571  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:00.314482  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:02.814541  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:04.814707  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:07.314539  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:09.315301  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:11.815244  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:14.314423  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:16.814443  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:18.814755  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:21.314484  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:23.314717  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:25.814443  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:28.314485  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:30.314753  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:32.315012  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:34.815426  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:37.314433  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:39.314735  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:41.814984  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:44.314568  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:46.814458  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:48.814746  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:51.314355  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:53.314402  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:55.315365  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:57.815349  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:00.315319  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:02.814380  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:04.814525  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:06.815341  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:09.314582  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:11.814478  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:13.814589  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:16.314583  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:18.814664  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:21.314434  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:23.814513  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:26.314408  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:28.315381  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:30.814480  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:33.314490  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:35.314586  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:37.814372  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:40.315384  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:42.814760  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:44.815296  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:47.314546  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:49.314933  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:51.314986  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:53.814770  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:55.815093  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:58.315491  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:00.815059  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:03.314608  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:05.315224  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:07.315363  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:09.814631  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:13:11.814122  678208 node_ready.go:38] duration metric: took 6m0.000298574s for node "test-preload-619273" to be "Ready" ...
	I1101 10:13:11.815935  678208 out.go:203] 
	W1101 10:13:11.816966  678208 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 10:13:11.816982  678208 out.go:285] * 
	* 
	W1101 10:13:11.818865  678208 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:13:11.819984  678208 out.go:203] 

                                                
                                                
** /stderr **
preload_test.go:67: out/minikube-linux-amd64 start -p test-preload-619273 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio failed: exit status 80
panic.go:636: *** TestPreload FAILED at 2025-11-01 10:13:11.859355888 +0000 UTC m=+2699.565256446
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect test-preload-619273
helpers_test.go:243: (dbg) docker inspect test-preload-619273:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c812f0b18072d56a8700aa5a84fd36e06fbcb5673e0118ed11ef4761d7005d0f",
	        "Created": "2025-11-01T10:05:58.857283269Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 678441,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:07:05.09654563Z",
	            "FinishedAt": "2025-11-01T10:06:53.348419909Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/c812f0b18072d56a8700aa5a84fd36e06fbcb5673e0118ed11ef4761d7005d0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c812f0b18072d56a8700aa5a84fd36e06fbcb5673e0118ed11ef4761d7005d0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/c812f0b18072d56a8700aa5a84fd36e06fbcb5673e0118ed11ef4761d7005d0f/hosts",
	        "LogPath": "/var/lib/docker/containers/c812f0b18072d56a8700aa5a84fd36e06fbcb5673e0118ed11ef4761d7005d0f/c812f0b18072d56a8700aa5a84fd36e06fbcb5673e0118ed11ef4761d7005d0f-json.log",
	        "Name": "/test-preload-619273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-619273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "test-preload-619273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c812f0b18072d56a8700aa5a84fd36e06fbcb5673e0118ed11ef4761d7005d0f",
	                "LowerDir": "/var/lib/docker/overlay2/098b01b26bafc87cb049035abab44ecd4b7bdad59418bee6ec5228622a42580e-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/098b01b26bafc87cb049035abab44ecd4b7bdad59418bee6ec5228622a42580e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/098b01b26bafc87cb049035abab44ecd4b7bdad59418bee6ec5228622a42580e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/098b01b26bafc87cb049035abab44ecd4b7bdad59418bee6ec5228622a42580e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-619273",
	                "Source": "/var/lib/docker/volumes/test-preload-619273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-619273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-619273",
	                "name.minikube.sigs.k8s.io": "test-preload-619273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2cb1fdb88481a55b1822e02d5912dae1827b19df365eb37a013cab5379a4207e",
	            "SandboxKey": "/var/run/docker/netns/2cb1fdb88481",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-619273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:7c:2b:b8:d7:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d85ef76603bdd453743a2c6dc80681004889fa2e86917375a5cae17ec05071a7",
	                    "EndpointID": "ee2bdf7952683133bf6145d9580169e5b390cc2d557e371835d9aea1f1062266",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "test-preload-619273",
	                        "c812f0b18072"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
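The inspect output confirms the kic container is running, with SSH published on 127.0.0.1:33078 and the node addressed as 192.168.76.2 on the test-preload-619273 network. Those two values can be pulled directly with the same Go templates the start log uses further down (sketch; the profile name is taken from this run):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' test-preload-619273
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test-preload-619273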
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-619273 -n test-preload-619273
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-619273 -n test-preload-619273: exit status 2 (318.710049ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
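The host container reports Running while the status command still exits non-zero, which points at the kubelet/apiserver checks rather than the docker machine itself. A hedged follow-up to see the full component breakdown and capture logs for an issue report (same profile; the log file name is arbitrary):

	out/minikube-linux-amd64 status -p test-preload-619273
	out/minikube-linux-amd64 -p test-preload-619273 logs --file=logs.txt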
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-619273 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-955035 cp multinode-955035-m03:/home/docker/cp-test.txt multinode-955035:/home/docker/cp-test_multinode-955035-m03_multinode-955035.txt         │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ ssh     │ multinode-955035 ssh -n multinode-955035-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ ssh     │ multinode-955035 ssh -n multinode-955035 sudo cat /home/docker/cp-test_multinode-955035-m03_multinode-955035.txt                                          │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ cp      │ multinode-955035 cp multinode-955035-m03:/home/docker/cp-test.txt multinode-955035-m02:/home/docker/cp-test_multinode-955035-m03_multinode-955035-m02.txt │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ ssh     │ multinode-955035 ssh -n multinode-955035-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ ssh     │ multinode-955035 ssh -n multinode-955035-m02 sudo cat /home/docker/cp-test_multinode-955035-m03_multinode-955035-m02.txt                                  │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ node    │ multinode-955035 node stop m03                                                                                                                            │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ node    │ multinode-955035 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ node    │ list -p multinode-955035                                                                                                                                  │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ stop    │ -p multinode-955035                                                                                                                                       │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ start   │ -p multinode-955035 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:04 UTC │
	│ node    │ list -p multinode-955035                                                                                                                                  │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:04 UTC │                     │
	│ node    │ multinode-955035 node delete m03                                                                                                                          │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:04 UTC │ 01 Nov 25 10:04 UTC │
	│ stop    │ multinode-955035 stop                                                                                                                                     │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:04 UTC │ 01 Nov 25 10:05 UTC │
	│ start   │ -p multinode-955035 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:05 UTC │ 01 Nov 25 10:05 UTC │
	│ node    │ list -p multinode-955035                                                                                                                                  │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:05 UTC │                     │
	│ start   │ -p multinode-955035-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-955035-m02 │ jenkins │ v1.37.0 │ 01 Nov 25 10:05 UTC │                     │
	│ start   │ -p multinode-955035-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-955035-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 10:05 UTC │ 01 Nov 25 10:05 UTC │
	│ node    │ add -p multinode-955035                                                                                                                                   │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:05 UTC │                     │
	│ delete  │ -p multinode-955035-m03                                                                                                                                   │ multinode-955035-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 10:05 UTC │ 01 Nov 25 10:05 UTC │
	│ delete  │ -p multinode-955035                                                                                                                                       │ multinode-955035     │ jenkins │ v1.37.0 │ 01 Nov 25 10:05 UTC │ 01 Nov 25 10:05 UTC │
	│ start   │ -p test-preload-619273 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-619273  │ jenkins │ v1.37.0 │ 01 Nov 25 10:05 UTC │ 01 Nov 25 10:06 UTC │
	│ image   │ test-preload-619273 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-619273  │ jenkins │ v1.37.0 │ 01 Nov 25 10:06 UTC │ 01 Nov 25 10:06 UTC │
	│ stop    │ -p test-preload-619273                                                                                                                                    │ test-preload-619273  │ jenkins │ v1.37.0 │ 01 Nov 25 10:06 UTC │ 01 Nov 25 10:06 UTC │
	│ start   │ -p test-preload-619273 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-619273  │ jenkins │ v1.37.0 │ 01 Nov 25 10:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:06:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:06:53.782649  678208 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:06:53.782964  678208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:06:53.782976  678208 out.go:374] Setting ErrFile to fd 2...
	I1101 10:06:53.782980  678208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:06:53.783242  678208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:06:53.783772  678208 out.go:368] Setting JSON to false
	I1101 10:06:53.784794  678208 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10151,"bootTime":1761981463,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:06:53.784916  678208 start.go:143] virtualization: kvm guest
	I1101 10:06:53.786922  678208 out.go:179] * [test-preload-619273] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:06:53.788061  678208 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:06:53.788100  678208 notify.go:221] Checking for updates...
	I1101 10:06:53.790195  678208 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:06:53.791279  678208 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:06:53.792534  678208 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:06:53.793627  678208 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:06:53.794666  678208 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:06:53.796176  678208 config.go:182] Loaded profile config "test-preload-619273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:06:53.797637  678208 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 10:06:53.798556  678208 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:06:53.823614  678208 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:06:53.823717  678208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:06:53.883758  678208 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-01 10:06:53.87275899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:06:53.883950  678208 docker.go:319] overlay module found
	I1101 10:06:53.885814  678208 out.go:179] * Using the docker driver based on existing profile
	I1101 10:06:53.886948  678208 start.go:309] selected driver: docker
	I1101 10:06:53.886966  678208 start.go:930] validating driver "docker" against &{Name:test-preload-619273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:06:53.887110  678208 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:06:53.887812  678208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:06:53.952044  678208 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-01 10:06:53.940726601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:06:53.952322  678208 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:06:53.952355  678208 cni.go:84] Creating CNI manager for ""
	I1101 10:06:53.952412  678208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:06:53.952449  678208 start.go:353] cluster config:
	{Name:test-preload-619273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:06:53.954366  678208 out.go:179] * Starting "test-preload-619273" primary control-plane node in "test-preload-619273" cluster
	I1101 10:06:53.955546  678208 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:06:53.956613  678208 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:06:53.957569  678208 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:06:53.957605  678208 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:06:53.978641  678208 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:06:53.978672  678208 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:06:54.063966  678208 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:06:54.063997  678208 cache.go:59] Caching tarball of preloaded images
	I1101 10:06:54.064178  678208 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:06:54.065908  678208 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1101 10:06:54.067023  678208 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 10:06:54.199526  678208 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1101 10:06:54.199579  678208 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:07:05.052963  678208 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1101 10:07:05.053164  678208 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/config.json ...
	I1101 10:07:05.053423  678208 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:07:05.053493  678208 start.go:360] acquireMachinesLock for test-preload-619273: {Name:mk4e57fdf3e52c3d778738348eedaa73a0e90e07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:07:05.053599  678208 start.go:364] duration metric: took 72.656µs to acquireMachinesLock for "test-preload-619273"
	I1101 10:07:05.053622  678208 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:07:05.053631  678208 fix.go:54] fixHost starting: 
	I1101 10:07:05.053944  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:05.071074  678208 fix.go:112] recreateIfNeeded on test-preload-619273: state=Stopped err=<nil>
	W1101 10:07:05.071126  678208 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:07:05.072825  678208 out.go:252] * Restarting existing docker container for "test-preload-619273" ...
	I1101 10:07:05.072922  678208 cli_runner.go:164] Run: docker start test-preload-619273
	I1101 10:07:05.307109  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:05.325508  678208 kic.go:430] container "test-preload-619273" state is running.
	I1101 10:07:05.325907  678208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-619273
	I1101 10:07:05.344165  678208 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/config.json ...
	I1101 10:07:05.344467  678208 machine.go:94] provisionDockerMachine start ...
	I1101 10:07:05.344535  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:05.363000  678208 main.go:143] libmachine: Using SSH client type: native
	I1101 10:07:05.363243  678208 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 10:07:05.363256  678208 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:07:05.363900  678208 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52494->127.0.0.1:33078: read: connection reset by peer
	I1101 10:07:08.509009  678208 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-619273
	
	I1101 10:07:08.509034  678208 ubuntu.go:182] provisioning hostname "test-preload-619273"
	I1101 10:07:08.509090  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:08.526610  678208 main.go:143] libmachine: Using SSH client type: native
	I1101 10:07:08.526828  678208 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 10:07:08.526859  678208 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-619273 && echo "test-preload-619273" | sudo tee /etc/hostname
	I1101 10:07:08.676774  678208 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-619273
	
	I1101 10:07:08.676911  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:08.694592  678208 main.go:143] libmachine: Using SSH client type: native
	I1101 10:07:08.694937  678208 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 10:07:08.694967  678208 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-619273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-619273/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-619273' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:07:08.836705  678208 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:07:08.836735  678208 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:07:08.836774  678208 ubuntu.go:190] setting up certificates
	I1101 10:07:08.836785  678208 provision.go:84] configureAuth start
	I1101 10:07:08.836861  678208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-619273
	I1101 10:07:08.854423  678208 provision.go:143] copyHostCerts
	I1101 10:07:08.854495  678208 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:07:08.854509  678208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:07:08.854596  678208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:07:08.854736  678208 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:07:08.854748  678208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:07:08.854787  678208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:07:08.854904  678208 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:07:08.854916  678208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:07:08.854952  678208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:07:08.855033  678208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.test-preload-619273 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-619273]
	I1101 10:07:08.905866  678208 provision.go:177] copyRemoteCerts
	I1101 10:07:08.905930  678208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:07:08.905968  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:08.923184  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.024638  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:07:09.043259  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 10:07:09.061897  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:07:09.080045  678208 provision.go:87] duration metric: took 243.244432ms to configureAuth
	I1101 10:07:09.080074  678208 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:07:09.080247  678208 config.go:182] Loaded profile config "test-preload-619273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:07:09.080353  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.098308  678208 main.go:143] libmachine: Using SSH client type: native
	I1101 10:07:09.098546  678208 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 10:07:09.098562  678208 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:07:09.382922  678208 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:07:09.382956  678208 machine.go:97] duration metric: took 4.038472851s to provisionDockerMachine
	I1101 10:07:09.382974  678208 start.go:293] postStartSetup for "test-preload-619273" (driver="docker")
	I1101 10:07:09.382989  678208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:07:09.383070  678208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:07:09.383132  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.401772  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.503038  678208 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:07:09.506658  678208 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:07:09.506685  678208 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:07:09.506696  678208 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:07:09.506755  678208 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:07:09.506896  678208 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:07:09.507021  678208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:07:09.514712  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:07:09.532604  678208 start.go:296] duration metric: took 149.610611ms for postStartSetup
	I1101 10:07:09.532690  678208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:07:09.532744  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.550924  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.648141  678208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:07:09.652980  678208 fix.go:56] duration metric: took 4.599329909s for fixHost
	I1101 10:07:09.653015  678208 start.go:83] releasing machines lock for "test-preload-619273", held for 4.599400282s
	I1101 10:07:09.653104  678208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-619273
	I1101 10:07:09.670401  678208 ssh_runner.go:195] Run: cat /version.json
	I1101 10:07:09.670457  678208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:07:09.670476  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.670532  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:09.688493  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.689238  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:09.786601  678208 ssh_runner.go:195] Run: systemctl --version
	I1101 10:07:09.839752  678208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:07:09.876928  678208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:07:09.881829  678208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:07:09.881918  678208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:07:09.890845  678208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:07:09.890870  678208 start.go:496] detecting cgroup driver to use...
	I1101 10:07:09.890913  678208 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:07:09.890961  678208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:07:09.906297  678208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:07:09.919764  678208 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:07:09.919826  678208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:07:09.935198  678208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:07:09.948206  678208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:07:10.027985  678208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:07:10.111487  678208 docker.go:234] disabling docker service ...
	I1101 10:07:10.111566  678208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:07:10.126542  678208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:07:10.139334  678208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:07:10.219026  678208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:07:10.295486  678208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:07:10.308138  678208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:07:10.322402  678208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1101 10:07:10.322457  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.331943  678208 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:07:10.332019  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.341717  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.350913  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.360222  678208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:07:10.368811  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.378222  678208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.387042  678208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:07:10.396131  678208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:07:10.403624  678208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:07:10.411276  678208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:07:10.491944  678208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:07:10.602665  678208 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:07:10.602748  678208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:07:10.606921  678208 start.go:564] Will wait 60s for crictl version
	I1101 10:07:10.606993  678208 ssh_runner.go:195] Run: which crictl
	I1101 10:07:10.610756  678208 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:07:10.636374  678208 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:07:10.636444  678208 ssh_runner.go:195] Run: crio --version
	I1101 10:07:10.665010  678208 ssh_runner.go:195] Run: crio --version
	I1101 10:07:10.695085  678208 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1101 10:07:10.696023  678208 cli_runner.go:164] Run: docker network inspect test-preload-619273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:07:10.713154  678208 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:07:10.717497  678208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:07:10.727814  678208 kubeadm.go:884] updating cluster {Name:test-preload-619273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:07:10.727939  678208 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:07:10.727991  678208 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:07:10.759366  678208 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:07:10.759388  678208 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:07:10.759448  678208 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:07:10.785677  678208 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:07:10.785698  678208 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:07:10.785706  678208 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1101 10:07:10.785812  678208 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-619273 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
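
The [Service] override above is what gets written to the kubelet drop-in a few lines later (the 369-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A hedged sketch for inspecting the result on the node after the daemon-reload:

	# Confirm systemd sees the overridden ExecStart from the 10-kubeadm.conf drop-in.
	systemctl cat kubelet
	systemctl show kubelet --property=ExecStart
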
	I1101 10:07:10.785910  678208 ssh_runner.go:195] Run: crio config
	I1101 10:07:10.832881  678208 cni.go:84] Creating CNI manager for ""
	I1101 10:07:10.832904  678208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:07:10.832924  678208 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:07:10.832949  678208 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-619273 NodeName:test-preload-619273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:07:10.833094  678208 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-619273"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:07:10.833173  678208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1101 10:07:10.841561  678208 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:07:10.841638  678208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:07:10.849466  678208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 10:07:10.861969  678208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:07:10.874587  678208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
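
The multi-document kubeadm config generated above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new (2215 bytes). If you need to sanity-check such a file by hand, recent kubeadm releases ship a validate subcommand; a hedged sketch using the binary path from the log:

	# Validate the generated multi-document kubeadm config without applying anything.
	sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
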
	I1101 10:07:10.887547  678208 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:07:10.891505  678208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:07:10.901558  678208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:07:10.984647  678208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:07:11.008473  678208 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273 for IP: 192.168.76.2
	I1101 10:07:11.008501  678208 certs.go:195] generating shared ca certs ...
	I1101 10:07:11.008523  678208 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:07:11.008703  678208 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:07:11.008743  678208 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:07:11.008752  678208 certs.go:257] generating profile certs ...
	I1101 10:07:11.008894  678208 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.key
	I1101 10:07:11.008999  678208 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/apiserver.key.9e880539
	I1101 10:07:11.009065  678208 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/proxy-client.key
	I1101 10:07:11.009208  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:07:11.009329  678208 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:07:11.009364  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:07:11.009424  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:07:11.009457  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:07:11.009489  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:07:11.009553  678208 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:07:11.010442  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:07:11.029854  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:07:11.049997  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:07:11.070403  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:07:11.095047  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:07:11.113522  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:07:11.131522  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:07:11.149808  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:07:11.169269  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:07:11.187407  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:07:11.207280  678208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:07:11.225260  678208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:07:11.238190  678208 ssh_runner.go:195] Run: openssl version
	I1101 10:07:11.244562  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:07:11.253418  678208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:07:11.257389  678208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:07:11.257450  678208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:07:11.292490  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:07:11.301097  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:07:11.310008  678208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:07:11.313961  678208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:07:11.314033  678208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:07:11.348349  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:07:11.357060  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:07:11.365819  678208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:07:11.369821  678208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:07:11.369903  678208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:07:11.404926  678208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
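
Each openssl x509 -hash call above prints the subject hash that names the matching /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two user certs). A hedged sketch of creating one of these links by hand:

	# OpenSSL looks CA certs up by subject-hash symlinks: <hash>.0 must point at the PEM file.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
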
	I1101 10:07:11.413480  678208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:07:11.417291  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:07:11.451256  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:07:11.485388  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:07:11.528497  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:07:11.570904  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:07:11.608436  678208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
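
The -checkend 86400 probes above ask whether each control-plane certificate stays valid for another 24 hours; a non-zero exit would make minikube regenerate the cert instead of reusing it. The same check with an explicit message (a sketch, using one cert path from the log):

	# Exit code 0: valid for at least another 86400 seconds (24 h); exit code 1: expires sooner.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for 24h" || echo "expires within 24h"
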
	I1101 10:07:11.642800  678208 kubeadm.go:401] StartCluster: {Name:test-preload-619273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-619273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:07:11.642928  678208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:07:11.643017  678208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:07:11.670887  678208 cri.go:89] found id: ""
	I1101 10:07:11.670955  678208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:07:11.679443  678208 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:07:11.679467  678208 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:07:11.679517  678208 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:07:11.687608  678208 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:07:11.688085  678208 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-619273" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:07:11.688206  678208 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-619273" cluster setting kubeconfig missing "test-preload-619273" context setting]
	I1101 10:07:11.688534  678208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:07:11.689117  678208 kapi.go:59] client config for test-preload-619273: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:07:11.689558  678208 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:07:11.689573  678208 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:07:11.689577  678208 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:07:11.689582  678208 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:07:11.689591  678208 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:07:11.690003  678208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:07:11.698090  678208 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:07:11.698130  678208 kubeadm.go:602] duration metric: took 18.656615ms to restartPrimaryControlPlane
	I1101 10:07:11.698143  678208 kubeadm.go:403] duration metric: took 55.355271ms to StartCluster
	I1101 10:07:11.698167  678208 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:07:11.698249  678208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:07:11.698832  678208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:07:11.699134  678208 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:07:11.699202  678208 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:07:11.699313  678208 addons.go:70] Setting storage-provisioner=true in profile "test-preload-619273"
	I1101 10:07:11.699331  678208 addons.go:239] Setting addon storage-provisioner=true in "test-preload-619273"
	W1101 10:07:11.699340  678208 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:07:11.699340  678208 addons.go:70] Setting default-storageclass=true in profile "test-preload-619273"
	I1101 10:07:11.699362  678208 config.go:182] Loaded profile config "test-preload-619273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:07:11.699374  678208 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-619273"
	I1101 10:07:11.699398  678208 host.go:66] Checking if "test-preload-619273" exists ...
	I1101 10:07:11.699688  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:11.699946  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:11.702218  678208 out.go:179] * Verifying Kubernetes components...
	I1101 10:07:11.703125  678208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:07:11.719075  678208 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:07:11.719690  678208 kapi.go:59] client config for test-preload-619273: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/test-preload-619273/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:07:11.719985  678208 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:07:11.720009  678208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:07:11.720011  678208 addons.go:239] Setting addon default-storageclass=true in "test-preload-619273"
	W1101 10:07:11.720026  678208 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:07:11.720069  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:11.720071  678208 host.go:66] Checking if "test-preload-619273" exists ...
	I1101 10:07:11.720530  678208 cli_runner.go:164] Run: docker container inspect test-preload-619273 --format={{.State.Status}}
	I1101 10:07:11.745578  678208 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:07:11.745606  678208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:07:11.745662  678208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-619273
	I1101 10:07:11.747048  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:11.764968  678208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/test-preload-619273/id_rsa Username:docker}
	I1101 10:07:11.800620  678208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:07:11.813769  678208 node_ready.go:35] waiting up to 6m0s for node "test-preload-619273" to be "Ready" ...
	I1101 10:07:11.856969  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:07:11.871008  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:11.914390  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:11.914429  678208 retry.go:31] will retry after 155.027362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:11.928916  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:11.928955  678208 retry.go:31] will retry after 224.636284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
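
Every apply in this stretch fails the same way because kubectl cannot reach the just-restarted apiserver on localhost:8443 yet; minikube's retry.go keeps re-running the apply with a growing backoff until the endpoint answers. A hedged sketch of the underlying wait (URL taken from the errors above; the polling interval is illustrative):

	# Poll the apiserver health endpoint until it responds, after which the queued applies can succeed.
	until curl -ksf https://localhost:8443/healthz >/dev/null; do sleep 2; done
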
	I1101 10:07:12.070242  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:12.129032  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.129074  678208 retry.go:31] will retry after 332.35463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.154269  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:12.210812  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.210868  678208 retry.go:31] will retry after 389.142155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.461743  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:12.518473  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.518506  678208 retry.go:31] will retry after 352.10689ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.600727  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:12.656555  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.656597  678208 retry.go:31] will retry after 505.543838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.871474  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:12.929241  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:12.929283  678208 retry.go:31] will retry after 539.500119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:13.162674  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:13.220430  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:13.220465  678208 retry.go:31] will retry after 1.187425433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:13.469632  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:13.523809  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:13.523856  678208 retry.go:31] will retry after 676.936779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:13.814864  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:14.201007  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:14.256739  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:14.256779  678208 retry.go:31] will retry after 1.777959804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:14.408568  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:14.465337  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:14.465368  678208 retry.go:31] will retry after 1.27465633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:15.740882  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:15.797872  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:15.797909  678208 retry.go:31] will retry after 2.563851359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:16.035865  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:16.092438  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:16.092484  678208 retry.go:31] will retry after 3.800152361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:16.315459  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:18.362624  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:18.420704  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:18.420739  678208 retry.go:31] will retry after 3.080458291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:18.814805  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:19.893010  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:19.948466  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:19.948507  678208 retry.go:31] will retry after 2.923862855s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:21.314396  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:21.501701  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:21.557131  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:21.557162  678208 retry.go:31] will retry after 4.639844377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:22.873459  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:22.929444  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:22.929495  678208 retry.go:31] will retry after 4.274933586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:23.315421  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:25.815363  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:26.197853  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:26.253422  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:26.253456  678208 retry.go:31] will retry after 4.893385712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:27.204648  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:27.264260  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:27.264294  678208 retry.go:31] will retry after 6.27111665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:28.314810  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:30.315161  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:31.147641  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:31.203317  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:31.203354  678208 retry.go:31] will retry after 6.884778373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:32.315361  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:33.535630  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:33.590863  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:33.590900  678208 retry.go:31] will retry after 7.398744418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:34.814429  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:37.314434  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:38.088709  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:38.145795  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:38.145855  678208 retry.go:31] will retry after 11.044520788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:39.314881  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:40.990619  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:41.045055  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:41.045087  678208 retry.go:31] will retry after 18.291698485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:41.315036  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:43.814525  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:45.815370  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:48.315344  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:49.190901  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:07:49.247744  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:49.247786  678208 retry.go:31] will retry after 25.044173414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:07:50.815343  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:53.315354  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:55.815225  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:07:57.815347  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:07:59.337741  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:07:59.394029  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:07:59.394064  678208 retry.go:31] will retry after 46.12680889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:08:00.314953  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:02.315177  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:04.814364  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:07.314401  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:09.314700  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:11.814541  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:08:14.293142  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:08:14.315222  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:14.350703  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:08:14.350742  678208 retry.go:31] will retry after 25.407527422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:08:16.814501  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:18.814613  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:20.814680  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:22.814886  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:25.314567  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:27.314699  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:29.315035  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:31.315381  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:33.814680  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:35.815385  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:37.815466  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:08:39.759229  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:08:39.816604  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:08:39.816635  678208 retry.go:31] will retry after 26.467019386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:08:40.314730  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:42.814625  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:45.315332  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:08:45.521613  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 10:08:45.581362  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:08:45.581519  678208 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1101 10:08:47.315373  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:49.814403  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:51.814480  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:53.814631  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:56.314461  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:08:58.814535  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:01.314384  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:03.814602  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:09:06.283933  678208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 10:09:06.314993  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:06.341330  678208 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:09:06.341466  678208 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 10:09:06.343274  678208 out.go:179] * Enabled addons: 
	I1101 10:09:06.344450  678208 addons.go:515] duration metric: took 1m54.645259125s for enable addons: enabled=[]
	W1101 10:09:08.814536  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:11.315352  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:13.315456  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:15.814621  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:18.314559  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:20.314783  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:22.315093  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:24.814352  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:27.315415  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:29.814573  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:31.814744  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:33.814934  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:35.815561  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:38.314595  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:40.314755  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:42.315113  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:44.814520  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:47.314397  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:49.314746  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:51.314825  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:53.315136  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:55.814531  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:57.814636  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:09:59.814792  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:01.815254  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:04.314644  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:06.814625  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:08.814979  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:10.815292  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:13.314460  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:15.315110  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:17.814582  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:19.815171  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:22.314760  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:24.314884  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:26.814780  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:28.814909  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:31.314925  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:33.814818  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:35.814974  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:37.815243  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:40.314468  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:42.314522  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:44.314922  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:46.315145  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:48.814930  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:51.314387  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:53.314470  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:55.314678  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:10:57.814571  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:00.314482  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:02.814541  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:04.814707  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:07.314539  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:09.315301  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:11.815244  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:14.314423  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:16.814443  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:18.814755  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:21.314484  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:23.314717  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:25.814443  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:28.314485  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:30.314753  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:32.315012  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:34.815426  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:37.314433  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:39.314735  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:41.814984  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:44.314568  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:46.814458  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:48.814746  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:51.314355  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:53.314402  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:55.315365  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:11:57.815349  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:00.315319  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:02.814380  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:04.814525  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:06.815341  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:09.314582  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:11.814478  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:13.814589  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:16.314583  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:18.814664  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:21.314434  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:23.814513  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:26.314408  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:28.315381  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:30.814480  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:33.314490  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:35.314586  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:37.814372  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:40.315384  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:42.814760  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:44.815296  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:47.314546  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:49.314933  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:51.314986  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:53.814770  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:55.815093  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:12:58.315491  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:00.815059  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:03.314608  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:05.315224  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:07.315363  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 10:13:09.814631  678208 node_ready.go:55] error getting node "test-preload-619273" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-619273": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 10:13:11.814122  678208 node_ready.go:38] duration metric: took 6m0.000298574s for node "test-preload-619273" to be "Ready" ...
	I1101 10:13:11.815935  678208 out.go:203] 
	W1101 10:13:11.816966  678208 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 10:13:11.816982  678208 out.go:285] * 
	W1101 10:13:11.818865  678208 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:13:11.819984  678208 out.go:203] 
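
	Every failure above reduces to the same symptom: nothing is listening on the apiserver port, so both the addon applies (localhost:8443) and the node "Ready" polls (192.168.76.2:8443) get "connection refused" until the 6m0s wait expires. A minimal sketch for confirming that symptom outside the test harness is shown below; the host/port values are taken from the log, while the probe program itself is an illustrative assumption, not part of minikube or this test.

	// probe_apiserver.go: a minimal, hypothetical sketch that dials the two
	// apiserver endpoints reported as refused in the log and, if a port is
	// open, checks /healthz. Endpoints come from the log above; everything
	// else is an assumption for illustration.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		endpoints := []string{"127.0.0.1:8443", "192.168.76.2:8443"} // from the log
		for _, ep := range endpoints {
			// Raw TCP dial distinguishes "connection refused" (no listener)
			// from timeouts or routing problems.
			conn, err := net.DialTimeout("tcp", ep, 2*time.Second)
			if err != nil {
				fmt.Printf("dial %s: %v\n", ep, err)
				continue
			}
			conn.Close()

			// Port is open: query /healthz, skipping TLS verification since the
			// apiserver presents a cluster-internal self-signed certificate.
			client := &http.Client{
				Timeout:   3 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			resp, err := client.Get("https://" + ep + "/healthz")
			if err != nil {
				fmt.Printf("healthz %s: %v\n", ep, err)
				continue
			}
			fmt.Printf("healthz %s: %s\n", ep, resp.Status)
			resp.Body.Close()
		}
	}

	On a healthy cluster this prints an HTTP 200 for /healthz; against the node captured here it would report "connection refused" for both endpoints, matching the retries above and pointing at the apiserver container (see the CRI-O createCtr failures below) rather than at the addon manifests themselves.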
	
	
	==> CRI-O <==
	Nov 01 10:08:37 test-preload-619273 crio[553]: time="2025-11-01T10:08:37.109442346Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/92e32d10577bb054e8e7d126386354d0eb8b69dc934a7b0e7a10c05f4b5cfbc6/merged\": directory not empty" id=8769005e-c94d-4af2-a351-51e908818413 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:09:20 test-preload-619273 crio[553]: time="2025-11-01T10:09:20.357137866Z" level=info msg="createCtr: deleting container 0d1c9450ab32be1286f54d15346a84a15417a56c600cd92782e9f27dee1d1f4f from storage" id=19a97582-1d87-467f-995f-147b50081aaf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:09:20 test-preload-619273 crio[553]: time="2025-11-01T10:09:20.357171059Z" level=info msg="createCtr: deleting container fa7b7645e00633fb11eef7a997e420b64679284a6a14e0c4096b48b7889ab5e7 from storage" id=23898a5f-abd6-48b8-82cc-83d7535e7f9d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:09:20 test-preload-619273 crio[553]: time="2025-11-01T10:09:20.357616611Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/c60ec4a99c10c3f5c1589e73d543854bf78b5461f8a3079d40c2636d58499f31/merged\": directory not empty" id=19a97582-1d87-467f-995f-147b50081aaf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:09:20 test-preload-619273 crio[553]: time="2025-11-01T10:09:20.357808058Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/2974ea533b1af9dcc4aaad40bcf5a455f622616b108a51cb5170744c63c51d7f/merged\": directory not empty" id=23898a5f-abd6-48b8-82cc-83d7535e7f9d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:09:20 test-preload-619273 crio[553]: time="2025-11-01T10:09:20.358278861Z" level=info msg="createCtr: deleting container a79a588877bc373e8d685e4df61dbfc4a529a21154e7de4af01a045f14a5bfc1 from storage" id=c7e4b86b-716f-4565-9230-321d6dc4b853 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:09:20 test-preload-619273 crio[553]: time="2025-11-01T10:09:20.35831814Z" level=info msg="createCtr: deleting container bbcf33560621ee1efe507ab79102b36441b73544eda25b660f63c62edecd47ed from storage" id=8769005e-c94d-4af2-a351-51e908818413 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:09:20 test-preload-619273 crio[553]: time="2025-11-01T10:09:20.358678059Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/92e32d10577bb054e8e7d126386354d0eb8b69dc934a7b0e7a10c05f4b5cfbc6/merged\": directory not empty" id=8769005e-c94d-4af2-a351-51e908818413 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:09:20 test-preload-619273 crio[553]: time="2025-11-01T10:09:20.358869186Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/993d9a22a122817d406e1d5e7b47e24761a6ab87353e1d9c92033f8d2e27ce18/merged\": directory not empty" id=c7e4b86b-716f-4565-9230-321d6dc4b853 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:10:25 test-preload-619273 crio[553]: time="2025-11-01T10:10:25.231263169Z" level=info msg="createCtr: deleting container fa7b7645e00633fb11eef7a997e420b64679284a6a14e0c4096b48b7889ab5e7 from storage" id=23898a5f-abd6-48b8-82cc-83d7535e7f9d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:10:25 test-preload-619273 crio[553]: time="2025-11-01T10:10:25.23132154Z" level=info msg="createCtr: deleting container 0d1c9450ab32be1286f54d15346a84a15417a56c600cd92782e9f27dee1d1f4f from storage" id=19a97582-1d87-467f-995f-147b50081aaf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:10:25 test-preload-619273 crio[553]: time="2025-11-01T10:10:25.231850911Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/2974ea533b1af9dcc4aaad40bcf5a455f622616b108a51cb5170744c63c51d7f/merged\": directory not empty" id=23898a5f-abd6-48b8-82cc-83d7535e7f9d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:10:25 test-preload-619273 crio[553]: time="2025-11-01T10:10:25.231920692Z" level=info msg="createCtr: deleting container bbcf33560621ee1efe507ab79102b36441b73544eda25b660f63c62edecd47ed from storage" id=8769005e-c94d-4af2-a351-51e908818413 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:10:25 test-preload-619273 crio[553]: time="2025-11-01T10:10:25.23202297Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/c60ec4a99c10c3f5c1589e73d543854bf78b5461f8a3079d40c2636d58499f31/merged\": directory not empty" id=19a97582-1d87-467f-995f-147b50081aaf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:10:25 test-preload-619273 crio[553]: time="2025-11-01T10:10:25.23227528Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/92e32d10577bb054e8e7d126386354d0eb8b69dc934a7b0e7a10c05f4b5cfbc6/merged\": directory not empty" id=8769005e-c94d-4af2-a351-51e908818413 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:10:25 test-preload-619273 crio[553]: time="2025-11-01T10:10:25.232389404Z" level=info msg="createCtr: deleting container a79a588877bc373e8d685e4df61dbfc4a529a21154e7de4af01a045f14a5bfc1 from storage" id=c7e4b86b-716f-4565-9230-321d6dc4b853 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:10:25 test-preload-619273 crio[553]: time="2025-11-01T10:10:25.232560736Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/993d9a22a122817d406e1d5e7b47e24761a6ab87353e1d9c92033f8d2e27ce18/merged\": directory not empty" id=c7e4b86b-716f-4565-9230-321d6dc4b853 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:12:02 test-preload-619273 crio[553]: time="2025-11-01T10:12:02.542691219Z" level=info msg="createCtr: deleting container a79a588877bc373e8d685e4df61dbfc4a529a21154e7de4af01a045f14a5bfc1 from storage" id=c7e4b86b-716f-4565-9230-321d6dc4b853 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:12:02 test-preload-619273 crio[553]: time="2025-11-01T10:12:02.542787182Z" level=info msg="createCtr: deleting container fa7b7645e00633fb11eef7a997e420b64679284a6a14e0c4096b48b7889ab5e7 from storage" id=23898a5f-abd6-48b8-82cc-83d7535e7f9d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:12:02 test-preload-619273 crio[553]: time="2025-11-01T10:12:02.542719019Z" level=info msg="createCtr: deleting container 0d1c9450ab32be1286f54d15346a84a15417a56c600cd92782e9f27dee1d1f4f from storage" id=19a97582-1d87-467f-995f-147b50081aaf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:12:02 test-preload-619273 crio[553]: time="2025-11-01T10:12:02.542766077Z" level=info msg="createCtr: deleting container bbcf33560621ee1efe507ab79102b36441b73544eda25b660f63c62edecd47ed from storage" id=8769005e-c94d-4af2-a351-51e908818413 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:12:02 test-preload-619273 crio[553]: time="2025-11-01T10:12:02.543139779Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/993d9a22a122817d406e1d5e7b47e24761a6ab87353e1d9c92033f8d2e27ce18/merged\": directory not empty" id=c7e4b86b-716f-4565-9230-321d6dc4b853 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:12:02 test-preload-619273 crio[553]: time="2025-11-01T10:12:02.54350009Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/c60ec4a99c10c3f5c1589e73d543854bf78b5461f8a3079d40c2636d58499f31/merged\": directory not empty" id=19a97582-1d87-467f-995f-147b50081aaf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:12:02 test-preload-619273 crio[553]: time="2025-11-01T10:12:02.543670307Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/92e32d10577bb054e8e7d126386354d0eb8b69dc934a7b0e7a10c05f4b5cfbc6/merged\": directory not empty" id=8769005e-c94d-4af2-a351-51e908818413 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:12:02 test-preload-619273 crio[553]: time="2025-11-01T10:12:02.54389665Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/2974ea533b1af9dcc4aaad40bcf5a455f622616b108a51cb5170744c63c51d7f/merged\": directory not empty" id=23898a5f-abd6-48b8-82cc-83d7535e7f9d name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> kernel <==
	 10:13:12 up  2:55,  0 user,  load average: 0.00, 0.37, 1.78
	Linux test-preload-619273 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 01 10:12:41 test-preload-619273 kubelet[713]: W1101 10:12:41.066289     713 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 01 10:12:41 test-preload-619273 kubelet[713]: E1101 10:12:41.066374     713 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 01 10:12:41 test-preload-619273 kubelet[713]: E1101 10:12:41.115717     713 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-619273\" not found"
	Nov 01 10:12:45 test-preload-619273 kubelet[713]: E1101 10:12:45.458044     713 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-619273.1873da0d9441e057  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-619273,UID:test-preload-619273,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-619273 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-619273,},FirstTimestamp:2025-11-01 10:07:11.088771159 +0000 UTC m=+0.078653522,LastTimestamp:2025-11-01 10:07:11.088771159 +0000 UTC m=+0.078653522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance
:test-preload-619273,}"
	Nov 01 10:12:45 test-preload-619273 kubelet[713]: E1101 10:12:45.736424     713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-619273?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 01 10:12:45 test-preload-619273 kubelet[713]: I1101 10:12:45.909274     713 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-619273"
	Nov 01 10:12:45 test-preload-619273 kubelet[713]: E1101 10:12:45.909717     713 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-619273"
	Nov 01 10:12:51 test-preload-619273 kubelet[713]: E1101 10:12:51.116762     713 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-619273\" not found"
	Nov 01 10:12:52 test-preload-619273 kubelet[713]: E1101 10:12:52.738007     713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-619273?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 01 10:12:52 test-preload-619273 kubelet[713]: I1101 10:12:52.911225     713 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-619273"
	Nov 01 10:12:52 test-preload-619273 kubelet[713]: E1101 10:12:52.911656     713 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-619273"
	Nov 01 10:12:55 test-preload-619273 kubelet[713]: E1101 10:12:55.459212     713 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-619273.1873da0d9441e057  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-619273,UID:test-preload-619273,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-619273 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-619273,},FirstTimestamp:2025-11-01 10:07:11.088771159 +0000 UTC m=+0.078653522,LastTimestamp:2025-11-01 10:07:11.088771159 +0000 UTC m=+0.078653522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance
:test-preload-619273,}"
	Nov 01 10:12:59 test-preload-619273 kubelet[713]: E1101 10:12:59.739442     713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-619273?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 01 10:12:59 test-preload-619273 kubelet[713]: I1101 10:12:59.912890     713 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-619273"
	Nov 01 10:12:59 test-preload-619273 kubelet[713]: E1101 10:12:59.913367     713 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-619273"
	Nov 01 10:13:01 test-preload-619273 kubelet[713]: E1101 10:13:01.116995     713 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-619273\" not found"
	Nov 01 10:13:05 test-preload-619273 kubelet[713]: E1101 10:13:05.460369     713 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-619273.1873da0d9441e057  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-619273,UID:test-preload-619273,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-619273 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-619273,},FirstTimestamp:2025-11-01 10:07:11.088771159 +0000 UTC m=+0.078653522,LastTimestamp:2025-11-01 10:07:11.088771159 +0000 UTC m=+0.078653522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance
:test-preload-619273,}"
	Nov 01 10:13:06 test-preload-619273 kubelet[713]: E1101 10:13:06.740630     713 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-619273?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 01 10:13:06 test-preload-619273 kubelet[713]: I1101 10:13:06.914827     713 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-619273"
	Nov 01 10:13:06 test-preload-619273 kubelet[713]: E1101 10:13:06.915222     713 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-619273"
	Nov 01 10:13:07 test-preload-619273 kubelet[713]: W1101 10:13:07.596989     713 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 01 10:13:07 test-preload-619273 kubelet[713]: E1101 10:13:07.597074     713 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 01 10:13:07 test-preload-619273 kubelet[713]: W1101 10:13:07.745712     713 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dtest-preload-619273&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 01 10:13:07 test-preload-619273 kubelet[713]: E1101 10:13:07.745792     713 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dtest-preload-619273&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 01 10:13:11 test-preload-619273 kubelet[713]: E1101 10:13:11.118115     713 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-619273\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-619273 -n test-preload-619273
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-619273 -n test-preload-619273: exit status 2 (310.985753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "test-preload-619273" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-619273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-619273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-619273: (2.416575713s)
--- FAIL: TestPreload (437.68s)
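
The crio messages at the top of this log repeatedly report "replacing mount point ... directory not empty" for specific overlay directories, which is why container creation keeps looping on cleanup. As a rough diagnostic sketch (not part of the test suite): one could inspect those leftover overlay mounts inside the node while a failing profile still exists; the path below is copied from the log lines above, and reaching the node shell via `minikube ssh` is an assumption about how you would run this interactively.

    # Hypothetical manual check of the overlay dirs named in the crio errors above (run inside the node).
    mount | grep /var/lib/containers/storage/overlay
    sudo ls -la /var/lib/containers/storage/overlay/2974ea533b1af9dcc4aaad40bcf5a455f622616b108a51cb5170744c63c51d7f/merged

Note the profile is deleted at the end of this section, so this only applies to a live reproduction of the failure.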

                                                
                                    
x
+
TestPause/serial/Pause (6.18s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-297661 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-297661 --alsologtostderr -v=5: exit status 80 (1.879489345s)

                                                
                                                
-- stdout --
	* Pausing node pause-297661 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:16:14.166657  705104 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:16:14.166961  705104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:14.166973  705104 out.go:374] Setting ErrFile to fd 2...
	I1101 10:16:14.166980  705104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:14.167232  705104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:16:14.167539  705104 out.go:368] Setting JSON to false
	I1101 10:16:14.167600  705104 mustload.go:66] Loading cluster: pause-297661
	I1101 10:16:14.168000  705104 config.go:182] Loaded profile config "pause-297661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:14.168448  705104 cli_runner.go:164] Run: docker container inspect pause-297661 --format={{.State.Status}}
	I1101 10:16:14.188145  705104 host.go:66] Checking if "pause-297661" exists ...
	I1101 10:16:14.188449  705104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:16:14.251779  705104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:16:14.240757186 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:16:14.252421  705104 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-297661 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:16:14.254545  705104 out.go:179] * Pausing node pause-297661 ... 
	I1101 10:16:14.255486  705104 host.go:66] Checking if "pause-297661" exists ...
	I1101 10:16:14.255816  705104 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:14.255881  705104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:14.275742  705104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:14.377375  705104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:14.413062  705104 pause.go:52] kubelet running: true
	I1101 10:16:14.413135  705104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:16:14.586428  705104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:16:14.586596  705104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:16:14.690401  705104 cri.go:89] found id: "540e6f288254c2f91c0b576e675ab75f176f33dc04857cd29478b2be023c0967"
	I1101 10:16:14.690432  705104 cri.go:89] found id: "ad61b10f8e140aeb0af6fd55e782e028e92c86d23d31f34a996fe6bee23d45e7"
	I1101 10:16:14.690438  705104 cri.go:89] found id: "11a7d411789fa6a12c87e30dddaad6f06e2d9ee1da69d65d8156525d726e8342"
	I1101 10:16:14.690443  705104 cri.go:89] found id: "0bd1538ac2657af6c6a5e8f373e61727a3b6a24642d5fc1bb8689a6cd54bc641"
	I1101 10:16:14.690447  705104 cri.go:89] found id: "24e09344febf421139bbbdae8d663120c3c223b397b6fa22e35806255e5a549b"
	I1101 10:16:14.690451  705104 cri.go:89] found id: "472cb4bf17c605290e55b8041352682602fbd3184fdcf7ae902cf8466aacac4c"
	I1101 10:16:14.690455  705104 cri.go:89] found id: "4cf89bdef43bcb6a8880f0173eb19d34c955c26650e304b2d61776b18a9f36c3"
	I1101 10:16:14.690458  705104 cri.go:89] found id: ""
	I1101 10:16:14.690512  705104 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:16:14.706670  705104 retry.go:31] will retry after 288.963108ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:16:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:16:14.995976  705104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:15.020749  705104 pause.go:52] kubelet running: false
	I1101 10:16:15.020819  705104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:16:15.178682  705104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:16:15.178776  705104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:16:15.258192  705104 cri.go:89] found id: "540e6f288254c2f91c0b576e675ab75f176f33dc04857cd29478b2be023c0967"
	I1101 10:16:15.258218  705104 cri.go:89] found id: "ad61b10f8e140aeb0af6fd55e782e028e92c86d23d31f34a996fe6bee23d45e7"
	I1101 10:16:15.258224  705104 cri.go:89] found id: "11a7d411789fa6a12c87e30dddaad6f06e2d9ee1da69d65d8156525d726e8342"
	I1101 10:16:15.258228  705104 cri.go:89] found id: "0bd1538ac2657af6c6a5e8f373e61727a3b6a24642d5fc1bb8689a6cd54bc641"
	I1101 10:16:15.258232  705104 cri.go:89] found id: "24e09344febf421139bbbdae8d663120c3c223b397b6fa22e35806255e5a549b"
	I1101 10:16:15.258236  705104 cri.go:89] found id: "472cb4bf17c605290e55b8041352682602fbd3184fdcf7ae902cf8466aacac4c"
	I1101 10:16:15.258241  705104 cri.go:89] found id: "4cf89bdef43bcb6a8880f0173eb19d34c955c26650e304b2d61776b18a9f36c3"
	I1101 10:16:15.258244  705104 cri.go:89] found id: ""
	I1101 10:16:15.258293  705104 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:16:15.271457  705104 retry.go:31] will retry after 382.397145ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:16:15Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:16:15.655067  705104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:15.675437  705104 pause.go:52] kubelet running: false
	I1101 10:16:15.675519  705104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:16:15.853723  705104 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:16:15.853942  705104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:16:15.953367  705104 cri.go:89] found id: "540e6f288254c2f91c0b576e675ab75f176f33dc04857cd29478b2be023c0967"
	I1101 10:16:15.953398  705104 cri.go:89] found id: "ad61b10f8e140aeb0af6fd55e782e028e92c86d23d31f34a996fe6bee23d45e7"
	I1101 10:16:15.953404  705104 cri.go:89] found id: "11a7d411789fa6a12c87e30dddaad6f06e2d9ee1da69d65d8156525d726e8342"
	I1101 10:16:15.953409  705104 cri.go:89] found id: "0bd1538ac2657af6c6a5e8f373e61727a3b6a24642d5fc1bb8689a6cd54bc641"
	I1101 10:16:15.953414  705104 cri.go:89] found id: "24e09344febf421139bbbdae8d663120c3c223b397b6fa22e35806255e5a549b"
	I1101 10:16:15.953418  705104 cri.go:89] found id: "472cb4bf17c605290e55b8041352682602fbd3184fdcf7ae902cf8466aacac4c"
	I1101 10:16:15.953422  705104 cri.go:89] found id: "4cf89bdef43bcb6a8880f0173eb19d34c955c26650e304b2d61776b18a9f36c3"
	I1101 10:16:15.953426  705104 cri.go:89] found id: ""
	I1101 10:16:15.953479  705104 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:16:15.969558  705104 out.go:203] 
	W1101 10:16:15.970726  705104 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:16:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:16:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:16:15.970754  705104 out.go:285] * 
	* 
	W1101 10:16:15.975218  705104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:16:15.976583  705104 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-297661 --alsologtostderr -v=5" : exit status 80
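
The stderr above records the exact sequence the pause path ran before exiting: check whether kubelet is active, disable it, list kube-system containers through crictl, then enumerate them with `runc list`, which is the step that fails with "open /run/runc: no such file or directory". A minimal sketch for replaying those steps by hand is below; the inner commands are copied from the log, while driving them through `minikube ssh` against this profile is an assumption about how one would run them interactively.

    # Replay the pause pre-checks from the stderr above inside the pause-297661 node (sketch, not the test's own code).
    out/minikube-linux-amd64 ssh -p pause-297661 "sudo systemctl is-active --quiet service kubelet && echo kubelet-active"
    out/minikube-linux-amd64 ssh -p pause-297661 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # The step that failed in the log: runc cannot open its state directory.
    out/minikube-linux-amd64 ssh -p pause-297661 "sudo runc list -f json"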
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-297661
helpers_test.go:243: (dbg) docker inspect pause-297661:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90",
	        "Created": "2025-11-01T10:15:13.339733244Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 689939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:15:13.417600707Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90/hostname",
	        "HostsPath": "/var/lib/docker/containers/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90/hosts",
	        "LogPath": "/var/lib/docker/containers/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90-json.log",
	        "Name": "/pause-297661",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-297661:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-297661",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90",
	                "LowerDir": "/var/lib/docker/overlay2/313b7c587eb9ab28ab9a9c5d9821c3876d2c9e40813fd4886b498b4cecc1f623-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/313b7c587eb9ab28ab9a9c5d9821c3876d2c9e40813fd4886b498b4cecc1f623/merged",
	                "UpperDir": "/var/lib/docker/overlay2/313b7c587eb9ab28ab9a9c5d9821c3876d2c9e40813fd4886b498b4cecc1f623/diff",
	                "WorkDir": "/var/lib/docker/overlay2/313b7c587eb9ab28ab9a9c5d9821c3876d2c9e40813fd4886b498b4cecc1f623/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-297661",
	                "Source": "/var/lib/docker/volumes/pause-297661/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-297661",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-297661",
	                "name.minikube.sigs.k8s.io": "pause-297661",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6429c075855c32480c1084a5d9e66d68c1e469a3cf9074b8dcfd4934cf5211bc",
	            "SandboxKey": "/var/run/docker/netns/6429c075855c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-297661": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:c1:a8:df:7c:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5efbbe29eca3cfcfada3bb9d99b9f97315c4248dc80ea0279fc1c930d5dd1b99",
	                    "EndpointID": "51b7b788e0002ced01b2b1e9614f8fd65f8a66159065a411c9943645fd6a8a2d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-297661",
	                        "f9246503bec0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
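
The pause command's stderr earlier in this section shows minikube resolving the node's SSH port with a docker inspect template before opening its SSH client. The same lookup can be run by hand against the port map shown in the inspect output above; this mirrors the cli_runner call from the log rather than adding anything new.

    # Pull the host port mapped to the node's SSH port, mirroring the cli_runner call in the stderr above.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-297661
    # Per the "Ports" section of the inspect output, this prints 33093.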
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-297661 -n pause-297661
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-297661 -n pause-297661: exit status 2 (398.655576ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-297661 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-297661 logs -n 25: (1.208178525s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-473081 --memory=3072 --driver=docker  --container-runtime=crio                                  │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │ 01 Nov 25 10:13 UTC │
	│ stop    │ -p scheduled-stop-473081 --schedule 5m                                                                            │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 5m                                                                            │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 5m                                                                            │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --cancel-scheduled                                                                       │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │ 01 Nov 25 10:13 UTC │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ delete  │ -p scheduled-stop-473081                                                                                          │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ start   │ -p insufficient-storage-500399 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-500399 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ delete  │ -p insufficient-storage-500399                                                                                    │ insufficient-storage-500399 │ jenkins │ v1.37.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:15 UTC │
	│ start   │ -p offline-crio-286433 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-286433         │ jenkins │ v1.37.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:15 UTC │
	│ start   │ -p pause-297661 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-297661                │ jenkins │ v1.37.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:16 UTC │
	│ start   │ -p stopped-upgrade-333944 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-333944      │ jenkins │ v1.32.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:16 UTC │
	│ start   │ -p missing-upgrade-489499 --memory=3072 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-489499      │ jenkins │ v1.32.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:16 UTC │
	│ delete  │ -p offline-crio-286433                                                                                            │ offline-crio-286433         │ jenkins │ v1.37.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:15 UTC │
	│ start   │ -p running-upgrade-821146 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-821146      │ jenkins │ v1.32.0 │ 01 Nov 25 10:15 UTC │                     │
	│ stop    │ stopped-upgrade-333944 stop                                                                                       │ stopped-upgrade-333944      │ jenkins │ v1.32.0 │ 01 Nov 25 10:16 UTC │ 01 Nov 25 10:16 UTC │
	│ start   │ -p missing-upgrade-489499 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ missing-upgrade-489499      │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │                     │
	│ start   │ -p stopped-upgrade-333944 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-333944      │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │                     │
	│ start   │ -p pause-297661 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-297661                │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │ 01 Nov 25 10:16 UTC │
	│ pause   │ -p pause-297661 --alsologtostderr -v=5                                                                            │ pause-297661                │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:16:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:16:07.596320  702757 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:16:07.596641  702757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:07.596652  702757 out.go:374] Setting ErrFile to fd 2...
	I1101 10:16:07.596659  702757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:07.596897  702757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:16:07.597379  702757 out.go:368] Setting JSON to false
	I1101 10:16:07.598543  702757 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10705,"bootTime":1761981463,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:16:07.598657  702757 start.go:143] virtualization: kvm guest
	I1101 10:16:07.600350  702757 out.go:179] * [pause-297661] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:16:07.601611  702757 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:16:07.601655  702757 notify.go:221] Checking for updates...
	I1101 10:16:07.603465  702757 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:16:07.604468  702757 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:16:07.605401  702757 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:16:07.606437  702757 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:16:07.607465  702757 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:16:07.608953  702757 config.go:182] Loaded profile config "pause-297661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:07.609487  702757 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:16:07.637761  702757 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:16:07.637960  702757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:16:07.705603  702757 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 10:16:07.694412155 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:16:07.705715  702757 docker.go:319] overlay module found
	I1101 10:16:07.707281  702757 out.go:179] * Using the docker driver based on existing profile
	I1101 10:16:07.708309  702757 start.go:309] selected driver: docker
	I1101 10:16:07.708329  702757 start.go:930] validating driver "docker" against &{Name:pause-297661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:16:07.708468  702757 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:16:07.708552  702757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:16:07.779796  702757 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 10:16:07.768562943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:16:07.780588  702757 cni.go:84] Creating CNI manager for ""
	I1101 10:16:07.780660  702757 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:16:07.780706  702757 start.go:353] cluster config:
	{Name:pause-297661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:16:07.782237  702757 out.go:179] * Starting "pause-297661" primary control-plane node in "pause-297661" cluster
	I1101 10:16:07.783152  702757 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:16:07.784152  702757 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:16:07.785060  702757 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:16:07.785123  702757 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:16:07.785138  702757 cache.go:59] Caching tarball of preloaded images
	I1101 10:16:07.785158  702757 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:16:07.785243  702757 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:16:07.785260  702757 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:16:07.785447  702757 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/config.json ...
	I1101 10:16:07.808195  702757 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:16:07.808218  702757 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:16:07.808241  702757 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:16:07.808282  702757 start.go:360] acquireMachinesLock for pause-297661: {Name:mk059299f77c9dd6878046d3e145d080b4a2defd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:16:07.808366  702757 start.go:364] duration metric: took 47.267µs to acquireMachinesLock for "pause-297661"
	I1101 10:16:07.808390  702757 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:16:07.808401  702757 fix.go:54] fixHost starting: 
	I1101 10:16:07.808670  702757 cli_runner.go:164] Run: docker container inspect pause-297661 --format={{.State.Status}}
	I1101 10:16:07.827583  702757 fix.go:112] recreateIfNeeded on pause-297661: state=Running err=<nil>
	W1101 10:16:07.827624  702757 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:16:04.804748  699371 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v running-upgrade-821146:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.526518528s)
	I1101 10:16:04.804789  699371 kic.go:203] duration metric: took 5.526788 seconds to extract preloaded images to volume
	W1101 10:16:04.804922  699371 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:16:04.804963  699371 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:16:04.805012  699371 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:16:04.868007  699371 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-821146 --name running-upgrade-821146 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-821146 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-821146 --network running-upgrade-821146 --ip 192.168.85.2 --volume running-upgrade-821146:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1101 10:16:05.178886  699371 cli_runner.go:164] Run: docker container inspect running-upgrade-821146 --format={{.State.Running}}
	I1101 10:16:05.211284  699371 cli_runner.go:164] Run: docker container inspect running-upgrade-821146 --format={{.State.Status}}
	I1101 10:16:05.244553  699371 cli_runner.go:164] Run: docker exec running-upgrade-821146 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:16:05.318203  699371 oci.go:144] the created container "running-upgrade-821146" has a running status.
	I1101 10:16:05.318231  699371 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa...
	I1101 10:16:05.628004  699371 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:16:05.657472  699371 cli_runner.go:164] Run: docker container inspect running-upgrade-821146 --format={{.State.Status}}
	I1101 10:16:05.678809  699371 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:16:05.678823  699371 kic_runner.go:114] Args: [docker exec --privileged running-upgrade-821146 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:16:05.733288  699371 cli_runner.go:164] Run: docker container inspect running-upgrade-821146 --format={{.State.Status}}
	I1101 10:16:05.755728  699371 machine.go:88] provisioning docker machine ...
	I1101 10:16:05.755771  699371 ubuntu.go:169] provisioning hostname "running-upgrade-821146"
	I1101 10:16:05.755876  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:05.781379  699371 main.go:141] libmachine: Using SSH client type: native
	I1101 10:16:05.781936  699371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1101 10:16:05.781950  699371 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-821146 && echo "running-upgrade-821146" | sudo tee /etc/hostname
	I1101 10:16:05.925094  699371 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-821146
	
	I1101 10:16:05.925181  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:05.949781  699371 main.go:141] libmachine: Using SSH client type: native
	I1101 10:16:05.950288  699371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1101 10:16:05.950306  699371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-821146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-821146/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-821146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:16:06.079820  699371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:16:06.079874  699371 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:16:06.079918  699371 ubuntu.go:177] setting up certificates
	I1101 10:16:06.079930  699371 provision.go:83] configureAuth start
	I1101 10:16:06.079983  699371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-821146
	I1101 10:16:06.101602  699371 provision.go:138] copyHostCerts
	I1101 10:16:06.101656  699371 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:16:06.101663  699371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:16:06.101746  699371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:16:06.101899  699371 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:16:06.101906  699371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:16:06.101937  699371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:16:06.102030  699371 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:16:06.102035  699371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:16:06.102068  699371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:16:06.102132  699371 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-821146 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-821146]
	I1101 10:16:06.498831  699371 provision.go:172] copyRemoteCerts
	I1101 10:16:06.498925  699371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:16:06.498971  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:06.516584  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:06.605952  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:16:06.636427  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:16:06.667264  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 10:16:06.695934  699371 provision.go:86] duration metric: configureAuth took 615.989209ms
	I1101 10:16:06.695960  699371 ubuntu.go:193] setting minikube options for container-runtime
	I1101 10:16:06.696218  699371 config.go:182] Loaded profile config "running-upgrade-821146": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 10:16:06.696375  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:06.715868  699371 main.go:141] libmachine: Using SSH client type: native
	I1101 10:16:06.716373  699371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1101 10:16:06.716395  699371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:16:06.950746  699371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:16:06.950767  699371 machine.go:91] provisioned docker machine in 1.195021927s
	I1101 10:16:06.950778  699371 client.go:171] LocalClient.Create took 8.560598968s
	I1101 10:16:06.950796  699371 start.go:167] duration metric: libmachine.API.Create for "running-upgrade-821146" took 8.560661721s
	I1101 10:16:06.950805  699371 start.go:300] post-start starting for "running-upgrade-821146" (driver="docker")
	I1101 10:16:06.950818  699371 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:16:06.950901  699371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:16:06.950947  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:06.969330  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:07.061298  699371 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:16:07.065090  699371 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:16:07.065133  699371 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 10:16:07.065142  699371 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 10:16:07.065149  699371 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 10:16:07.065159  699371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:16:07.065210  699371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:16:07.065278  699371 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:16:07.065359  699371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:16:07.075732  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:07.105180  699371 start.go:303] post-start completed in 154.359527ms
	I1101 10:16:07.105587  699371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-821146
	I1101 10:16:07.123891  699371 profile.go:148] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/config.json ...
	I1101 10:16:07.124235  699371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:07.124288  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:07.142690  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:07.226281  699371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:16:07.231432  699371 start.go:128] duration metric: createHost completed in 8.843291577s
	I1101 10:16:07.231453  699371 start.go:83] releasing machines lock for "running-upgrade-821146", held for 8.843488078s
	I1101 10:16:07.231545  699371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-821146
	I1101 10:16:07.249744  699371 ssh_runner.go:195] Run: cat /version.json
	I1101 10:16:07.249789  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:07.249808  699371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:16:07.249892  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:07.268819  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:07.270063  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:07.440865  699371 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:07.445726  699371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:16:07.589134  699371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 10:16:07.594609  699371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:07.619435  699371 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 10:16:07.619521  699371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:07.654993  699371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1101 10:16:07.655019  699371 start.go:472] detecting cgroup driver to use...
	I1101 10:16:07.655059  699371 detect.go:199] detected "systemd" cgroup driver on host os
	I1101 10:16:07.655143  699371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:16:07.677632  699371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:16:07.691922  699371 docker.go:203] disabling cri-docker service (if available) ...
	I1101 10:16:07.691972  699371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:16:07.709598  699371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:16:07.726745  699371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:16:07.805634  699371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:16:07.885654  699371 docker.go:219] disabling docker service ...
	I1101 10:16:07.885705  699371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:16:07.905591  699371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:16:07.918794  699371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:16:07.989293  699371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:16:08.132903  699371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:16:08.145461  699371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:16:08.164287  699371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:16:08.164339  699371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:08.177787  699371 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:16:08.177872  699371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:08.190191  699371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:08.201934  699371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:08.213108  699371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:16:08.223785  699371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:16:08.233765  699371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:16:08.244368  699371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:08.365413  699371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:16:08.470286  699371 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:16:08.470355  699371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:16:08.474603  699371 start.go:540] Will wait 60s for crictl version
	I1101 10:16:08.474654  699371 ssh_runner.go:195] Run: which crictl
	I1101 10:16:08.478550  699371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:16:08.515102  699371 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 10:16:08.515175  699371 ssh_runner.go:195] Run: crio --version
	I1101 10:16:08.554366  699371 ssh_runner.go:195] Run: crio --version
	I1101 10:16:08.595283  699371 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1101 10:16:06.121569  701603 delete.go:124] DEMOLISHING missing-upgrade-489499 ...
	I1101 10:16:06.121702  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:06.142301  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	W1101 10:16:06.142369  701603 stop.go:83] unable to get state: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:06.142395  701603 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:06.142944  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:06.162545  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:06.162657  701603 delete.go:82] Unable to get host status for missing-upgrade-489499, assuming it has already been deleted: state: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:06.162716  701603 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-489499
	W1101 10:16:06.183056  701603 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-489499 returned with exit code 1
	I1101 10:16:06.183123  701603 kic.go:371] could not find the container missing-upgrade-489499 to remove it. will try anyways
	I1101 10:16:06.183189  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:06.204234  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	W1101 10:16:06.204314  701603 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:06.204376  701603 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-489499 /bin/bash -c "sudo init 0"
	W1101 10:16:06.225005  701603 cli_runner.go:211] docker exec --privileged -t missing-upgrade-489499 /bin/bash -c "sudo init 0" returned with exit code 1
	I1101 10:16:06.225043  701603 oci.go:659] error shutdown missing-upgrade-489499: docker exec --privileged -t missing-upgrade-489499 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.226188  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:07.245554  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:07.245619  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.245630  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:07.245668  701603 retry.go:31] will retry after 476.905631ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.723058  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:07.746952  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:07.747034  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.747046  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:07.747085  701603 retry.go:31] will retry after 581.344514ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:08.329508  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:08.349421  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:08.349499  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:08.349530  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:08.349566  701603 retry.go:31] will retry after 1.157346557s: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:09.508073  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:09.526274  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:09.526348  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:09.526365  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:09.526408  701603 retry.go:31] will retry after 1.54856021s: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.829026  702757 out.go:252] * Updating the running docker "pause-297661" container ...
	I1101 10:16:07.829062  702757 machine.go:94] provisionDockerMachine start ...
	I1101 10:16:07.829140  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:07.852768  702757 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:07.853094  702757 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1101 10:16:07.853110  702757 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:16:08.000432  702757 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-297661
	
	I1101 10:16:08.000472  702757 ubuntu.go:182] provisioning hostname "pause-297661"
	I1101 10:16:08.000529  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:08.021993  702757 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:08.022335  702757 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1101 10:16:08.022364  702757 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-297661 && echo "pause-297661" | sudo tee /etc/hostname
	I1101 10:16:08.177125  702757 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-297661
	
	I1101 10:16:08.177208  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:08.196195  702757 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:08.196432  702757 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1101 10:16:08.196449  702757 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-297661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-297661/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-297661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:16:08.342357  702757 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:16:08.342395  702757 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:16:08.342424  702757 ubuntu.go:190] setting up certificates
	I1101 10:16:08.342452  702757 provision.go:84] configureAuth start
	I1101 10:16:08.342521  702757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-297661
	I1101 10:16:08.361973  702757 provision.go:143] copyHostCerts
	I1101 10:16:08.362036  702757 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:16:08.362057  702757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:16:08.362136  702757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:16:08.362280  702757 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:16:08.362294  702757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:16:08.362336  702757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:16:08.362416  702757 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:16:08.362426  702757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:16:08.362473  702757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:16:08.362549  702757 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.pause-297661 san=[127.0.0.1 192.168.76.2 localhost minikube pause-297661]
	I1101 10:16:08.795899  702757 provision.go:177] copyRemoteCerts
	I1101 10:16:08.795996  702757 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:16:08.796044  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:08.816009  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:08.920173  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:16:08.940082  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:16:08.960222  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:16:08.980904  702757 provision.go:87] duration metric: took 638.43581ms to configureAuth
	I1101 10:16:08.980941  702757 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:16:08.981216  702757 config.go:182] Loaded profile config "pause-297661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:08.981338  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.000166  702757 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:09.000386  702757 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1101 10:16:09.000401  702757 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:16:09.304122  702757 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:16:09.304149  702757 machine.go:97] duration metric: took 1.475078336s to provisionDockerMachine
	I1101 10:16:09.304161  702757 start.go:293] postStartSetup for "pause-297661" (driver="docker")
	I1101 10:16:09.304170  702757 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:16:09.304228  702757 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:16:09.304311  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.324145  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:09.428554  702757 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:16:09.432869  702757 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:16:09.432900  702757 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:16:09.432914  702757 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:16:09.432967  702757 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:16:09.433038  702757 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:16:09.433124  702757 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:16:09.441819  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:09.461853  702757 start.go:296] duration metric: took 157.654927ms for postStartSetup
	I1101 10:16:09.461970  702757 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:09.462038  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.480979  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:09.581620  702757 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:16:09.587448  702757 fix.go:56] duration metric: took 1.779037944s for fixHost
	I1101 10:16:09.587490  702757 start.go:83] releasing machines lock for "pause-297661", held for 1.779110221s
	I1101 10:16:09.587562  702757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-297661
	I1101 10:16:09.606754  702757 ssh_runner.go:195] Run: cat /version.json
	I1101 10:16:09.606799  702757 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:16:09.606820  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.606901  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.626512  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:09.626831  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:09.788502  702757 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:09.797226  702757 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:16:09.845533  702757 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:16:09.850881  702757 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:16:09.850976  702757 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:09.861784  702757 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:16:09.861814  702757 start.go:496] detecting cgroup driver to use...
	I1101 10:16:09.861866  702757 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:16:09.861921  702757 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:16:09.882374  702757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:16:09.899051  702757 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:16:09.899121  702757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:16:09.917594  702757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:16:09.932649  702757 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:16:10.075203  702757 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:16:10.203330  702757 docker.go:234] disabling docker service ...
	I1101 10:16:10.203406  702757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:16:10.221454  702757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:16:10.237590  702757 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:16:10.349195  702757 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:16:10.472669  702757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:16:10.487166  702757 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:16:10.502510  702757 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:16:10.502577  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.513388  702757 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:16:10.513464  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.523180  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.533259  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.543161  702757 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:16:10.552762  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.563122  702757 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.572468  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.582341  702757 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:16:10.590528  702757 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:16:10.598594  702757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:10.746345  702757 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:16:10.904374  702757 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:16:10.904458  702757 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:16:10.908949  702757 start.go:564] Will wait 60s for crictl version
	I1101 10:16:10.909008  702757 ssh_runner.go:195] Run: which crictl
	I1101 10:16:10.912982  702757 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:16:10.944347  702757 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:16:10.944439  702757 ssh_runner.go:195] Run: crio --version
	I1101 10:16:10.977060  702757 ssh_runner.go:195] Run: crio --version
	I1101 10:16:11.012702  702757 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:16:06.352849  701851 out.go:252] * Restarting existing docker container for "stopped-upgrade-333944" ...
	I1101 10:16:06.352923  701851 cli_runner.go:164] Run: docker start stopped-upgrade-333944
	I1101 10:16:06.607992  701851 cli_runner.go:164] Run: docker container inspect stopped-upgrade-333944 --format={{.State.Status}}
	I1101 10:16:06.628021  701851 kic.go:430] container "stopped-upgrade-333944" state is running.
	I1101 10:16:06.628463  701851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-333944
	I1101 10:16:06.648515  701851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/config.json ...
	I1101 10:16:06.648787  701851 machine.go:94] provisionDockerMachine start ...
	I1101 10:16:06.648898  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:06.669489  701851 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:06.669873  701851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:16:06.669895  701851 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:16:06.670578  701851 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45894->127.0.0.1:33118: read: connection reset by peer
	I1101 10:16:09.791872  701851 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-333944
	
	I1101 10:16:09.791904  701851 ubuntu.go:182] provisioning hostname "stopped-upgrade-333944"
	I1101 10:16:09.791976  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:09.813075  701851 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:09.813410  701851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:16:09.813432  701851 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-333944 && echo "stopped-upgrade-333944" | sudo tee /etc/hostname
	I1101 10:16:09.953231  701851 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-333944
	
	I1101 10:16:09.953315  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:09.984184  701851 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:09.984545  701851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:16:09.984577  701851 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-333944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-333944/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-333944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:16:10.106308  701851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
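	(The shell block above is how the provisioner pins the machine hostname in /etc/hosts: rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new mapping. A rough, illustrative Go equivalent of that logic, operating on a local hosts file directly rather than over SSH; the file path and hostname below are placeholders, not taken from minikube itself.)

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostname mirrors the shell logic above: if no line already maps the
    // hostname, either rewrite an existing 127.0.1.1 entry or append a new one.
    func ensureHostname(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
    		return nil // hostname already present
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	var out string
    	if loopback.Match(data) {
    		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
    	} else {
    		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
    	}
    	return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
    	if err := ensureHostname("/tmp/hosts-copy", "stopped-upgrade-333944"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }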
	I1101 10:16:10.106345  701851 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:16:10.106385  701851 ubuntu.go:190] setting up certificates
	I1101 10:16:10.106398  701851 provision.go:84] configureAuth start
	I1101 10:16:10.106483  701851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-333944
	I1101 10:16:10.132251  701851 provision.go:143] copyHostCerts
	I1101 10:16:10.132339  701851 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:16:10.132363  701851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:16:10.132444  701851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:16:10.132635  701851 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:16:10.132652  701851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:16:10.132699  701851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:16:10.132800  701851 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:16:10.132813  701851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:16:10.132870  701851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:16:10.132966  701851 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-333944 san=[127.0.0.1 192.168.94.2 localhost minikube stopped-upgrade-333944]
	I1101 10:16:10.302095  701851 provision.go:177] copyRemoteCerts
	I1101 10:16:10.302158  701851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:16:10.302195  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:10.321017  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:10.411145  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:16:10.436867  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:16:10.463918  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:16:10.490495  701851 provision.go:87] duration metric: took 384.060553ms to configureAuth
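	(configureAuth above regenerates a server certificate whose SANs cover the container IP, localhost and the machine name. A minimal, self-contained sketch of issuing such a SAN-bearing certificate from a CA with Go's crypto/x509; the key size, subject fields and SAN list are placeholders modeled on this run, and a throwaway CA is generated in place of minikube's ca.pem/ca-key.pem.)

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (minikube would load its existing ca.pem / ca-key.pem instead).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the same kind of SAN list as in the log above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "stopped-upgrade-333944"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-333944"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)

    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }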
	I1101 10:16:10.490524  701851 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:16:10.490724  701851 config.go:182] Loaded profile config "stopped-upgrade-333944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 10:16:10.490933  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:10.510890  701851 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:10.511213  701851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:16:10.511256  701851 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:16:10.784043  701851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:16:10.784073  701851 machine.go:97] duration metric: took 4.135267915s to provisionDockerMachine
	I1101 10:16:10.784089  701851 start.go:293] postStartSetup for "stopped-upgrade-333944" (driver="docker")
	I1101 10:16:10.784104  701851 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:16:10.784180  701851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:16:10.784246  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:10.805012  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:10.896783  701851 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:16:10.901066  701851 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:16:10.901105  701851 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 10:16:10.901117  701851 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 10:16:10.901126  701851 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 10:16:10.901140  701851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:16:10.901231  701851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:16:10.901342  701851 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:16:10.901472  701851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:16:10.913144  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:10.943724  701851 start.go:296] duration metric: took 159.615875ms for postStartSetup
	I1101 10:16:10.943830  701851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:10.943912  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:10.965855  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:11.052100  701851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:16:11.057612  701851 fix.go:56] duration metric: took 4.724957844s for fixHost
	I1101 10:16:11.057646  701851 start.go:83] releasing machines lock for "stopped-upgrade-333944", held for 4.725017272s
	I1101 10:16:11.057750  701851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-333944
	I1101 10:16:11.078080  701851 ssh_runner.go:195] Run: cat /version.json
	I1101 10:16:11.078137  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:11.078203  701851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:16:11.078303  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:11.099544  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:11.100064  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:11.013749  702757 cli_runner.go:164] Run: docker network inspect pause-297661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:16:11.031500  702757 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:16:11.036304  702757 kubeadm.go:884] updating cluster {Name:pause-297661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:16:11.036482  702757 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:16:11.036527  702757 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:11.072565  702757 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:16:11.072594  702757 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:16:11.072666  702757 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:11.107582  702757 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:16:11.107609  702757 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:16:11.107616  702757 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:16:11.107738  702757 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-297661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:16:11.107823  702757 ssh_runner.go:195] Run: crio config
	I1101 10:16:11.173977  702757 cni.go:84] Creating CNI manager for ""
	I1101 10:16:11.174005  702757 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:16:11.174028  702757 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:16:11.174074  702757 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-297661 NodeName:pause-297661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:16:11.174253  702757 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-297661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:16:11.174339  702757 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:16:11.183787  702757 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:16:11.183872  702757 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:16:11.193049  702757 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 10:16:11.207551  702757 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:16:11.222031  702757 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
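	(The InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents shown earlier are concatenated into the kubeadm.yaml.new that the scp line above ships to /var/tmp/minikube. A hedged sketch of assembling such a multi-document file; the document bodies are placeholders, and the `kubeadm config validate` call is an assumption that only holds on kubeadm releases that ship that subcommand.)

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Placeholder document bodies; minikube templates these from the cluster config.
    	docs := []string{
    		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...",
    		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# ...",
    		"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n# ...",
    		"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n# ...",
    	}
    	path := "/tmp/kubeadm.yaml.new"
    	if err := os.WriteFile(path, []byte(strings.Join(docs, "\n---\n")+"\n"), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// "kubeadm config validate" is only available on recent kubeadm releases;
    	// older binaries (e.g. the v1.28 ones used elsewhere in this run) may lack it.
    	out, err := exec.Command("kubeadm", "config", "validate", "--config", path).CombinedOutput()
    	fmt.Printf("%s(err: %v)\n", out, err)
    }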
	I1101 10:16:11.236266  702757 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:16:11.240796  702757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:11.386072  702757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:16:11.401697  702757 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661 for IP: 192.168.76.2
	I1101 10:16:11.401719  702757 certs.go:195] generating shared ca certs ...
	I1101 10:16:11.401740  702757 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:11.401916  702757 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:16:11.401960  702757 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:16:11.401971  702757 certs.go:257] generating profile certs ...
	I1101 10:16:11.402077  702757 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.key
	I1101 10:16:11.402144  702757 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/apiserver.key.57c967b1
	I1101 10:16:11.402187  702757 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/proxy-client.key
	I1101 10:16:11.402305  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:16:11.402352  702757 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:16:11.402363  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:16:11.402388  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:16:11.402412  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:16:11.402438  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:16:11.402480  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:11.403217  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:16:11.424752  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:16:11.446715  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:16:11.467973  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:16:11.488705  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:16:11.511678  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:16:11.532379  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:16:11.552803  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:16:11.575984  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:16:11.595623  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:16:11.616628  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:16:11.636820  702757 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:16:11.654265  702757 ssh_runner.go:195] Run: openssl version
	I1101 10:16:11.662128  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:16:11.672327  702757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:11.676789  702757 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:11.676872  702757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:11.713780  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:16:11.723238  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:16:11.733307  702757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:16:11.741612  702757 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:16:11.741691  702757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:16:11.779720  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:16:11.790185  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:16:11.800988  702757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:16:11.805744  702757 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:16:11.805827  702757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:16:11.850866  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
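	(The symlink names used above, b5213941.0, 51391683.0 and 3ec20f2e.0, are the OpenSSL subject hashes of the respective CA certificates, which is why each `ln -fs` is preceded by an `openssl x509 -hash -noout` run. A small sketch that reproduces that pairing by shelling out to openssl; the paths are illustrative and root privileges would be needed for the real /etc/ssl/certs directory.)

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash installs certPath under certsDir/<subject-hash>.0, the layout
    // OpenSSL uses to look up trusted CAs by hash.
    func linkByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mirror the ln -fs (force) semantics
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }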
	I1101 10:16:11.860320  702757 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:16:11.864969  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:16:11.901057  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:16:11.940645  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:16:11.977684  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:16:12.017993  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:16:12.070203  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
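	(Each `-checkend 86400` run above asks whether the certificate is still valid 24 hours from now; a non-zero exit would trigger regeneration. The same check in pure Go, assuming a PEM-encoded certificate file; the path below is copied from this run, so adjust it for any other profile.)

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // inside the given window, i.e. what `openssl x509 -checkend <seconds>` tests.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }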
	I1101 10:16:12.107495  702757 kubeadm.go:401] StartCluster: {Name:pause-297661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:16:12.107666  702757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:16:12.107751  702757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:16:12.143186  702757 cri.go:89] found id: "540e6f288254c2f91c0b576e675ab75f176f33dc04857cd29478b2be023c0967"
	I1101 10:16:12.143211  702757 cri.go:89] found id: "ad61b10f8e140aeb0af6fd55e782e028e92c86d23d31f34a996fe6bee23d45e7"
	I1101 10:16:12.143217  702757 cri.go:89] found id: "11a7d411789fa6a12c87e30dddaad6f06e2d9ee1da69d65d8156525d726e8342"
	I1101 10:16:12.143221  702757 cri.go:89] found id: "0bd1538ac2657af6c6a5e8f373e61727a3b6a24642d5fc1bb8689a6cd54bc641"
	I1101 10:16:12.143225  702757 cri.go:89] found id: "24e09344febf421139bbbdae8d663120c3c223b397b6fa22e35806255e5a549b"
	I1101 10:16:12.143229  702757 cri.go:89] found id: "472cb4bf17c605290e55b8041352682602fbd3184fdcf7ae902cf8466aacac4c"
	I1101 10:16:12.143233  702757 cri.go:89] found id: "4cf89bdef43bcb6a8880f0173eb19d34c955c26650e304b2d61776b18a9f36c3"
	I1101 10:16:12.143237  702757 cri.go:89] found id: ""
	I1101 10:16:12.143290  702757 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:16:12.157926  702757 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:16:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:16:12.158041  702757 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:16:12.168802  702757 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:16:12.168827  702757 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:16:12.168902  702757 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:16:12.178272  702757 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:12.179142  702757 kubeconfig.go:125] found "pause-297661" server: "https://192.168.76.2:8443"
	I1101 10:16:12.180289  702757 kapi.go:59] client config for pause-297661: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:16:12.180828  702757 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:16:12.180867  702757 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:16:12.180874  702757 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:16:12.180879  702757 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:16:12.180884  702757 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:16:12.181273  702757 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:16:12.190710  702757 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:16:12.190754  702757 kubeadm.go:602] duration metric: took 21.920894ms to restartPrimaryControlPlane
	I1101 10:16:12.190767  702757 kubeadm.go:403] duration metric: took 83.284759ms to StartCluster
	I1101 10:16:12.190791  702757 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:12.190881  702757 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:16:12.191880  702757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:12.192211  702757 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:16:12.192289  702757 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:16:12.192428  702757 config.go:182] Loaded profile config "pause-297661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:12.195771  702757 out.go:179] * Enabled addons: 
	I1101 10:16:12.195777  702757 out.go:179] * Verifying Kubernetes components...
	W1101 10:16:11.281471  701851 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.32.0 -> Actual minikube version: v1.37.0
	I1101 10:16:11.281591  701851 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:11.291365  701851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:16:11.436146  701851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 10:16:11.442206  701851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:11.453568  701851 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 10:16:11.453650  701851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:11.464740  701851 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:16:11.464767  701851 start.go:496] detecting cgroup driver to use...
	I1101 10:16:11.464808  701851 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:16:11.464875  701851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:16:11.480652  701851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:16:11.494788  701851 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:16:11.494872  701851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:16:11.509775  701851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:16:11.524327  701851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:16:11.612398  701851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:16:11.687117  701851 docker.go:234] disabling docker service ...
	I1101 10:16:11.687185  701851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:16:11.701436  701851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:16:11.714461  701851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:16:11.782649  701851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:16:11.864451  701851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:16:11.878041  701851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:16:11.897473  701851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:16:11.897541  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.909350  701851 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:16:11.909428  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.921813  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.933480  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.945803  701851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:16:11.956476  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.968070  701851 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.986964  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.998284  701851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:16:12.008856  701851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:16:12.019718  701851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:12.097482  701851 ssh_runner.go:195] Run: sudo systemctl restart crio
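	(The sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf in place, setting pause_image, cgroup_manager, conmon_cgroup and default_sysctls before the daemon-reload and crio restart. A hedged Go sketch of the same idea, rewriting a single `key = value` line with a regular expression; this is not minikube's actual implementation, and the file path is taken from the log.)

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setTOMLKey replaces a `key = value` line in a CRI-O drop-in, the way the
    // sed one-liners above do; if the key is absent the file is left unchanged.
    func setTOMLKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	updated := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    	return os.WriteFile(path, updated, 0644)
    }

    func main() {
    	err := setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "systemd")
    	fmt.Println(err)
    	// A `systemctl daemon-reload` and `systemctl restart crio` follow,
    	// as in the log lines above.
    }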
	I1101 10:16:12.208745  701851 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:16:12.208815  701851 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:16:12.213688  701851 start.go:564] Will wait 60s for crictl version
	I1101 10:16:12.213767  701851 ssh_runner.go:195] Run: which crictl
	I1101 10:16:12.218352  701851 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:16:12.260074  701851 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 10:16:12.260162  701851 ssh_runner.go:195] Run: crio --version
	I1101 10:16:12.297205  701851 ssh_runner.go:195] Run: crio --version
	I1101 10:16:12.338661  701851 out.go:179] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1101 10:16:12.196742  702757 addons.go:515] duration metric: took 4.466984ms for enable addons: enabled=[]
	I1101 10:16:12.196787  702757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:12.351176  702757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:16:12.366625  702757 node_ready.go:35] waiting up to 6m0s for node "pause-297661" to be "Ready" ...
	I1101 10:16:12.374920  702757 node_ready.go:49] node "pause-297661" is "Ready"
	I1101 10:16:12.374952  702757 node_ready.go:38] duration metric: took 8.284808ms for node "pause-297661" to be "Ready" ...
	I1101 10:16:12.374971  702757 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:16:12.375032  702757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:12.388511  702757 api_server.go:72] duration metric: took 196.249946ms to wait for apiserver process to appear ...
	I1101 10:16:12.388545  702757 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:16:12.388574  702757 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:16:12.392829  702757 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:16:12.393793  702757 api_server.go:141] control plane version: v1.34.1
	I1101 10:16:12.393820  702757 api_server.go:131] duration metric: took 5.268141ms to wait for apiserver health ...
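	(The healthz probe above is a plain HTTPS GET against the apiserver using the client certificate and CA listed in the rest.Config dumped earlier. A minimal equivalent with net/http; the endpoint and file paths are copied from this run's pause-297661 profile, so adjust them for any other cluster.)

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	profile := "/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661"
    	cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.76.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as in the log
    }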
	I1101 10:16:12.393831  702757 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:16:12.397406  702757 system_pods.go:59] 7 kube-system pods found
	I1101 10:16:12.397466  702757 system_pods.go:61] "coredns-66bc5c9577-sdhft" [1680b086-3fa8-4b80-9705-650dcd1f0da2] Running
	I1101 10:16:12.397476  702757 system_pods.go:61] "etcd-pause-297661" [004f1413-5456-4433-a27c-e6d6cdebbeb7] Running
	I1101 10:16:12.397482  702757 system_pods.go:61] "kindnet-vlk6r" [263025a4-2ce5-48bc-805a-20a2a35bb5f2] Running
	I1101 10:16:12.397488  702757 system_pods.go:61] "kube-apiserver-pause-297661" [dd149e49-01fe-48e9-bb94-ba6f69de3812] Running
	I1101 10:16:12.397494  702757 system_pods.go:61] "kube-controller-manager-pause-297661" [0d0a0202-ad04-4392-af68-d0691f7cfb69] Running
	I1101 10:16:12.397505  702757 system_pods.go:61] "kube-proxy-5mqgt" [4c409377-301d-463a-8a0e-beb0afb959c7] Running
	I1101 10:16:12.397510  702757 system_pods.go:61] "kube-scheduler-pause-297661" [566a3183-af28-4b5c-a6da-ff5231371114] Running
	I1101 10:16:12.397520  702757 system_pods.go:74] duration metric: took 3.668609ms to wait for pod list to return data ...
	I1101 10:16:12.397535  702757 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:16:12.400095  702757 default_sa.go:45] found service account: "default"
	I1101 10:16:12.400126  702757 default_sa.go:55] duration metric: took 2.582356ms for default service account to be created ...
	I1101 10:16:12.400143  702757 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:16:12.403510  702757 system_pods.go:86] 7 kube-system pods found
	I1101 10:16:12.403546  702757 system_pods.go:89] "coredns-66bc5c9577-sdhft" [1680b086-3fa8-4b80-9705-650dcd1f0da2] Running
	I1101 10:16:12.403555  702757 system_pods.go:89] "etcd-pause-297661" [004f1413-5456-4433-a27c-e6d6cdebbeb7] Running
	I1101 10:16:12.403560  702757 system_pods.go:89] "kindnet-vlk6r" [263025a4-2ce5-48bc-805a-20a2a35bb5f2] Running
	I1101 10:16:12.403566  702757 system_pods.go:89] "kube-apiserver-pause-297661" [dd149e49-01fe-48e9-bb94-ba6f69de3812] Running
	I1101 10:16:12.403571  702757 system_pods.go:89] "kube-controller-manager-pause-297661" [0d0a0202-ad04-4392-af68-d0691f7cfb69] Running
	I1101 10:16:12.403576  702757 system_pods.go:89] "kube-proxy-5mqgt" [4c409377-301d-463a-8a0e-beb0afb959c7] Running
	I1101 10:16:12.403581  702757 system_pods.go:89] "kube-scheduler-pause-297661" [566a3183-af28-4b5c-a6da-ff5231371114] Running
	I1101 10:16:12.403592  702757 system_pods.go:126] duration metric: took 3.440505ms to wait for k8s-apps to be running ...
	I1101 10:16:12.403606  702757 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:16:12.403662  702757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:12.419217  702757 system_svc.go:56] duration metric: took 15.595949ms WaitForService to wait for kubelet
	I1101 10:16:12.419252  702757 kubeadm.go:587] duration metric: took 226.997941ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:16:12.419293  702757 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:16:12.422402  702757 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:16:12.422436  702757 node_conditions.go:123] node cpu capacity is 8
	I1101 10:16:12.422452  702757 node_conditions.go:105] duration metric: took 3.152854ms to run NodePressure ...
	I1101 10:16:12.422469  702757 start.go:242] waiting for startup goroutines ...
	I1101 10:16:12.422480  702757 start.go:247] waiting for cluster config update ...
	I1101 10:16:12.422490  702757 start.go:256] writing updated cluster config ...
	I1101 10:16:12.422873  702757 ssh_runner.go:195] Run: rm -f paused
	I1101 10:16:12.427386  702757 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:16:12.428018  702757 kapi.go:59] client config for pause-297661: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:16:12.431429  702757 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sdhft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.436393  702757 pod_ready.go:94] pod "coredns-66bc5c9577-sdhft" is "Ready"
	I1101 10:16:12.436421  702757 pod_ready.go:86] duration metric: took 4.968434ms for pod "coredns-66bc5c9577-sdhft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.438702  702757 pod_ready.go:83] waiting for pod "etcd-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.443390  702757 pod_ready.go:94] pod "etcd-pause-297661" is "Ready"
	I1101 10:16:12.443422  702757 pod_ready.go:86] duration metric: took 4.688891ms for pod "etcd-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.445715  702757 pod_ready.go:83] waiting for pod "kube-apiserver-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.450509  702757 pod_ready.go:94] pod "kube-apiserver-pause-297661" is "Ready"
	I1101 10:16:12.450538  702757 pod_ready.go:86] duration metric: took 4.797086ms for pod "kube-apiserver-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.452691  702757 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:08.596290  699371 cli_runner.go:164] Run: docker network inspect running-upgrade-821146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:16:08.615057  699371 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:16:08.619288  699371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:16:08.632234  699371 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 10:16:08.632310  699371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:08.700381  699371 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 10:16:08.700398  699371 crio.go:415] Images already preloaded, skipping extraction
	I1101 10:16:08.700460  699371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:08.739505  699371 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 10:16:08.739523  699371 cache_images.go:84] Images are preloaded, skipping loading
	I1101 10:16:08.739585  699371 ssh_runner.go:195] Run: crio config
	I1101 10:16:08.784002  699371 cni.go:84] Creating CNI manager for ""
	I1101 10:16:08.784020  699371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:16:08.784045  699371 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 10:16:08.784067  699371 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-821146 NodeName:running-upgrade-821146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:16:08.784225  699371 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-821146"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:16:08.784287  699371 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=running-upgrade-821146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-821146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 10:16:08.784343  699371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 10:16:08.794771  699371 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:16:08.794863  699371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:16:08.804816  699371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1101 10:16:08.824898  699371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:16:08.847095  699371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1101 10:16:08.867313  699371 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:16:08.871578  699371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:16:08.884177  699371 certs.go:56] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146 for IP: 192.168.85.2
	I1101 10:16:08.884218  699371 certs.go:190] acquiring lock for shared ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:08.884382  699371 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:16:08.884417  699371 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:16:08.884459  699371 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.key
	I1101 10:16:08.884468  699371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.crt with IP's: []
	I1101 10:16:09.018306  699371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.crt ...
	I1101 10:16:09.018324  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.crt: {Name:mkebb948426e0df207ca499f0bf3906116d6ac56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.018532  699371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.key ...
	I1101 10:16:09.018591  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.key: {Name:mk6402d3ca5bae4d9ebd11f18db1c42a81b05ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.018691  699371 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key.43b9df8c
	I1101 10:16:09.018701  699371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 10:16:09.164681  699371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt.43b9df8c ...
	I1101 10:16:09.164699  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt.43b9df8c: {Name:mk3ee3cd5185c3e81e853ca95204110a187312f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.164882  699371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key.43b9df8c ...
	I1101 10:16:09.164892  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key.43b9df8c: {Name:mk8aa052fbf9204e6e1f2ad1c3fb3404e44232f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.164962  699371 certs.go:337] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt
	I1101 10:16:09.165033  699371 certs.go:341] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key
	I1101 10:16:09.165079  699371 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.key
	I1101 10:16:09.165088  699371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.crt with IP's: []
	I1101 10:16:09.284135  699371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.crt ...
	I1101 10:16:09.284153  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.crt: {Name:mk475c901dc2d91b0c1db1c5b6f81a461bff5868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.284784  699371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.key ...
	I1101 10:16:09.284798  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.key: {Name:mk2db181e43018b8dd5dbaef19b77899d02377bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.285066  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:16:09.285119  699371 certs.go:433] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:16:09.285135  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:16:09.285165  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:16:09.285193  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:16:09.285225  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:16:09.285299  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:09.286247  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 10:16:09.314968  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:16:09.342938  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:16:09.369961  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:16:09.396624  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:16:09.424216  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:16:09.452710  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:16:09.481172  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:16:09.508188  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:16:09.538993  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:16:09.565324  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:16:09.593526  699371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:16:09.615194  699371 ssh_runner.go:195] Run: openssl version
	I1101 10:16:09.622020  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:16:09.635222  699371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:16:09.639518  699371 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:16:09.639580  699371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:16:09.647592  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:16:09.659265  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:16:09.669863  699371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:09.674262  699371 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:09.674334  699371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:09.682772  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:16:09.693544  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:16:09.704242  699371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:16:09.708035  699371 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:16:09.708102  699371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:16:09.715556  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:16:09.726628  699371 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 10:16:09.730778  699371 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 10:16:09.730849  699371 kubeadm.go:404] StartCluster: {Name:running-upgrade-821146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-821146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 10:16:09.730930  699371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:16:09.731019  699371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:16:09.770023  699371 cri.go:89] found id: ""
	I1101 10:16:09.770092  699371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:16:09.780138  699371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:16:09.790516  699371 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:16:09.790578  699371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:16:09.801915  699371 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:16:09.801958  699371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:16:09.904917  699371 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:16:09.986131  699371 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:16:12.831870  702757 pod_ready.go:94] pod "kube-controller-manager-pause-297661" is "Ready"
	I1101 10:16:12.831901  702757 pod_ready.go:86] duration metric: took 379.183696ms for pod "kube-controller-manager-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:13.032376  702757 pod_ready.go:83] waiting for pod "kube-proxy-5mqgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:13.432411  702757 pod_ready.go:94] pod "kube-proxy-5mqgt" is "Ready"
	I1101 10:16:13.432440  702757 pod_ready.go:86] duration metric: took 400.034314ms for pod "kube-proxy-5mqgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:13.631923  702757 pod_ready.go:83] waiting for pod "kube-scheduler-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:14.031711  702757 pod_ready.go:94] pod "kube-scheduler-pause-297661" is "Ready"
	I1101 10:16:14.031747  702757 pod_ready.go:86] duration metric: took 399.79457ms for pod "kube-scheduler-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:14.031762  702757 pod_ready.go:40] duration metric: took 1.604339868s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:16:14.079955  702757 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:16:14.081598  702757 out.go:179] * Done! kubectl is now configured to use "pause-297661" cluster and "default" namespace by default
	I1101 10:16:11.075221  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:11.097295  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:11.097367  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:11.097385  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:11.097420  701603 retry.go:31] will retry after 2.299770178s: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:13.397970  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:13.416971  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:13.417035  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:13.417045  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:13.417071  701603 retry.go:31] will retry after 4.406936807s: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:12.339671  701851 cli_runner.go:164] Run: docker network inspect stopped-upgrade-333944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:16:12.359438  701851 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 10:16:12.363686  701851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:16:12.377246  701851 kubeadm.go:884] updating cluster {Name:stopped-upgrade-333944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-333944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:16:12.377387  701851 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 10:16:12.377461  701851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:12.427006  701851 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:16:12.427032  701851 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:16:12.427084  701851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:12.468178  701851 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:16:12.468201  701851 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:16:12.468212  701851 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.3 crio true true} ...
	I1101 10:16:12.468328  701851 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-333944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-333944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:16:12.468397  701851 ssh_runner.go:195] Run: crio config
	I1101 10:16:12.516599  701851 cni.go:84] Creating CNI manager for ""
	I1101 10:16:12.516622  701851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:16:12.516660  701851 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:16:12.516696  701851 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-333944 NodeName:stopped-upgrade-333944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:16:12.516895  701851 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-333944"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:16:12.516978  701851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 10:16:12.527182  701851 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:16:12.527261  701851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:16:12.537459  701851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:16:12.557409  701851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:16:12.577191  701851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1101 10:16:12.599524  701851 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:16:12.603440  701851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:16:12.616384  701851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:12.686057  701851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:16:12.716434  701851 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944 for IP: 192.168.94.2
	I1101 10:16:12.716461  701851 certs.go:195] generating shared ca certs ...
	I1101 10:16:12.716486  701851 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:12.716650  701851 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:16:12.716688  701851 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:16:12.716698  701851 certs.go:257] generating profile certs ...
	I1101 10:16:12.716818  701851 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/client.key
	I1101 10:16:12.716874  701851 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key.30e2cb39
	I1101 10:16:12.716892  701851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt.30e2cb39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1101 10:16:13.013363  701851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt.30e2cb39 ...
	I1101 10:16:13.013403  701851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt.30e2cb39: {Name:mk3b5ec04d1c7859f7248b1b748749b10f12813e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:13.013629  701851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key.30e2cb39 ...
	I1101 10:16:13.013652  701851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key.30e2cb39: {Name:mkacc5dce1c72baecbfce14bbf129eb0f38259b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:13.013765  701851 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt.30e2cb39 -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt
	I1101 10:16:13.013982  701851 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key.30e2cb39 -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key
	I1101 10:16:13.014198  701851 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/proxy-client.key
	I1101 10:16:13.014347  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:16:13.014393  701851 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:16:13.014407  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:16:13.014439  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:16:13.014474  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:16:13.014511  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:16:13.014568  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:13.015193  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:16:13.043499  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:16:13.070630  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:16:13.097892  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:16:13.126004  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:16:13.153740  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:16:13.181473  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:16:13.208934  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:16:13.236773  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:16:13.265284  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:16:13.295326  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:16:13.323167  701851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:16:13.344130  701851 ssh_runner.go:195] Run: openssl version
	I1101 10:16:13.350508  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:16:13.361695  701851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:16:13.365755  701851 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:16:13.365821  701851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:16:13.373089  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:16:13.383598  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:16:13.394961  701851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:13.399250  701851 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:13.399314  701851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:13.407144  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:16:13.419351  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:16:13.431908  701851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:16:13.436652  701851 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:16:13.436714  701851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:16:13.444463  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:16:13.454902  701851 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:16:13.459325  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:16:13.467428  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:16:13.475219  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:16:13.483058  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:16:13.491354  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:16:13.499890  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:16:13.507652  701851 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-333944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-333944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:16:13.507742  701851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:16:13.507807  701851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:16:13.550876  701851 cri.go:89] found id: ""
	I1101 10:16:13.550950  701851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1101 10:16:13.562685  701851 kubeadm.go:414] apiserver tunnel failed: apiserver port not set
	I1101 10:16:13.562715  701851 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:16:13.562723  701851 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:16:13.562775  701851 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:16:13.573943  701851 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.574624  701851 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-333944" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:16:13.575107  701851 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-333944" cluster setting kubeconfig missing "stopped-upgrade-333944" context setting]
	I1101 10:16:13.575732  701851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:13.576494  701851 kapi.go:59] client config for stopped-upgrade-333944: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:16:13.576899  701851 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:16:13.576913  701851 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:16:13.576917  701851 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:16:13.576921  701851 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:16:13.576924  701851 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:16:13.577286  701851 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:16:13.587872  701851 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-01 10:15:50.528118420 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-01 10:16:12.596531486 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: systemd
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
	I1101 10:16:13.587894  701851 kubeadm.go:1161] stopping kube-system containers ...
	I1101 10:16:13.587909  701851 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 10:16:13.587961  701851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:16:13.629731  701851 cri.go:89] found id: ""
	I1101 10:16:13.629820  701851 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 10:16:13.644473  701851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:16:13.656215  701851 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Nov  1 10:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  1 10:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Nov  1 10:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  1 10:15 /etc/kubernetes/scheduler.conf
	
	I1101 10:16:13.656287  701851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1101 10:16:13.668295  701851 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.668366  701851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:16:13.678955  701851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1101 10:16:13.690497  701851 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.690562  701851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:16:13.701497  701851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1101 10:16:13.712114  701851 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.712198  701851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:16:13.722460  701851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1101 10:16:13.733124  701851 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.733184  701851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:16:13.743829  701851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:16:13.754675  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:13.814900  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:14.707071  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:14.893553  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:14.966450  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:15.041070  701851 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:16:15.041164  701851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:15.541591  701851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:16.041955  701851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:16.062370  701851 api_server.go:72] duration metric: took 1.021316236s to wait for apiserver process to appear ...
	I1101 10:16:16.062394  701851 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:16:16.062414  701851 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	
	
	==> CRI-O <==
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.845388974Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846238103Z" level=info msg="Conmon does support the --sync option"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846256487Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846269728Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846977274Z" level=info msg="Conmon does support the --sync option"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846993715Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.851169164Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.851203449Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.851808429Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.852328265Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.852396055Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.858331013Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.89779886Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-sdhft Namespace:kube-system ID:2d6173903cf69fd71a52f980550120f31b77ecd258d533ec4380ab058a5e9104 UID:1680b086-3fa8-4b80-9705-650dcd1f0da2 NetNS:/var/run/netns/d8bb2587-6396-4438-9db9-43295182b658 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00053c0b8}] Aliases:map[]}"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898062459Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-sdhft for CNI network kindnet (type=ptp)"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898611834Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898634039Z" level=info msg="Starting seccomp notifier watcher"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898683389Z" level=info msg="Create NRI interface"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898811047Z" level=info msg="built-in NRI default validator is disabled"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898822189Z" level=info msg="runtime interface created"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898861498Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.89887145Z" level=info msg="runtime interface starting up..."
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898879815Z" level=info msg="starting plugins..."
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898894911Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.899306442Z" level=info msg="No systemd watchdog enabled"
	Nov 01 10:16:10 pause-297661 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	540e6f288254c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   2d6173903cf69       coredns-66bc5c9577-sdhft               kube-system
	ad61b10f8e140       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   321500a0559f0       kube-proxy-5mqgt                       kube-system
	11a7d411789fa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   a9022fc85dd84       kindnet-vlk6r                          kube-system
	0bd1538ac2657       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   40 seconds ago      Running             kube-controller-manager   0                   41786b85536e3       kube-controller-manager-pause-297661   kube-system
	24e09344febf4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   40 seconds ago      Running             kube-scheduler            0                   90ed41db61fc5       kube-scheduler-pause-297661            kube-system
	472cb4bf17c60       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   41 seconds ago      Running             kube-apiserver            0                   2fb0cdec64231       kube-apiserver-pause-297661            kube-system
	4cf89bdef43bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   41 seconds ago      Running             etcd                      0                   ce4b61eb3b0b5       etcd-pause-297661                      kube-system
	
	
	==> coredns [540e6f288254c2f91c0b576e675ab75f176f33dc04857cd29478b2be023c0967] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35693 - 28569 "HINFO IN 7267284124165664637.5832267534672565079. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078967336s
	
	
	==> describe nodes <==
	Name:               pause-297661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-297661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=pause-297661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_15_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:15:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-297661
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:16:04 +0000   Sat, 01 Nov 2025 10:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:16:04 +0000   Sat, 01 Nov 2025 10:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:16:04 +0000   Sat, 01 Nov 2025 10:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:16:04 +0000   Sat, 01 Nov 2025 10:16:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-297661
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                217a4a88-d1dc-46a4-b597-55c22a5e81c2
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-sdhft                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-297661                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-vlk6r                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-297661             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-297661    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-5mqgt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-297661             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node pause-297661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node pause-297661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node pause-297661 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node pause-297661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node pause-297661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node pause-297661 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-297661 event: Registered Node pause-297661 in Controller
	  Normal  NodeReady                13s                kubelet          Node pause-297661 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [4cf89bdef43bcb6a8880f0173eb19d34c955c26650e304b2d61776b18a9f36c3] <==
	{"level":"warn","ts":"2025-11-01T10:15:44.008098Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.235523ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351100809765 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:kube-scheduler\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:kube-scheduler\" value_size:1768 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-01T10:15:44.008184Z","caller":"traceutil/trace.go:172","msg":"trace[1878441020] transaction","detail":"{read_only:false; response_revision:101; number_of_response:1; }","duration":"350.244899ms","start":"2025-11-01T10:15:43.657926Z","end":"2025-11-01T10:15:44.008171Z","steps":["trace[1878441020] 'process raft request'  (duration: 106.88717ms)","trace[1878441020] 'compare'  (duration: 243.076018ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:15:44.008228Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:15:43.657903Z","time spent":"350.30848ms","remote":"127.0.0.1:55496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1820,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:kube-scheduler\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:kube-scheduler\" value_size:1768 >> failure:<>"}
	{"level":"warn","ts":"2025-11-01T10:15:44.434563Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.393563ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351100809767 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:controller:attachdetach-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:attachdetach-controller\" value_size:865 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-01T10:15:44.434636Z","caller":"traceutil/trace.go:172","msg":"trace[755911] linearizableReadLoop","detail":"{readStateIndex:106; appliedIndex:105; }","duration":"133.07005ms","start":"2025-11-01T10:15:44.301555Z","end":"2025-11-01T10:15:44.434625Z","steps":["trace[755911] 'read index received'  (duration: 41.009µs)","trace[755911] 'applied index is now lower than readState.Index'  (duration: 133.028519ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:15:44.434690Z","caller":"traceutil/trace.go:172","msg":"trace[608754834] transaction","detail":"{read_only:false; response_revision:102; number_of_response:1; }","duration":"421.922905ms","start":"2025-11-01T10:15:44.012719Z","end":"2025-11-01T10:15:44.434642Z","steps":["trace[608754834] 'process raft request'  (duration: 170.403348ms)","trace[608754834] 'compare'  (duration: 251.26647ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:15:44.434739Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.181975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-01T10:15:44.434770Z","caller":"traceutil/trace.go:172","msg":"trace[630951225] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:102; }","duration":"133.219197ms","start":"2025-11-01T10:15:44.301542Z","end":"2025-11-01T10:15:44.434762Z","steps":["trace[630951225] 'agreement among raft nodes before linearized reading'  (duration: 133.146566ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:15:44.434802Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:15:44.012701Z","time spent":"422.053583ms","remote":"127.0.0.1:55496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":937,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:controller:attachdetach-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:attachdetach-controller\" value_size:865 >> failure:<>"}
	{"level":"warn","ts":"2025-11-01T10:15:44.691814Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.904491ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351100809773 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-297661.1873da82bef39798\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-297661.1873da82bef39798\" value_size:544 lease:6414984314246033960 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-01T10:15:44.692051Z","caller":"traceutil/trace.go:172","msg":"trace[353057931] transaction","detail":"{read_only:false; response_revision:105; number_of_response:1; }","duration":"239.388556ms","start":"2025-11-01T10:15:44.452653Z","end":"2025-11-01T10:15:44.692041Z","steps":["trace[353057931] 'process raft request'  (duration: 239.34498ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:15:44.692061Z","caller":"traceutil/trace.go:172","msg":"trace[1926171201] transaction","detail":"{read_only:false; response_revision:104; number_of_response:1; }","duration":"249.727729ms","start":"2025-11-01T10:15:44.442320Z","end":"2025-11-01T10:15:44.692048Z","steps":["trace[1926171201] 'process raft request'  (duration: 249.627974ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:15:44.692063Z","caller":"traceutil/trace.go:172","msg":"trace[1163522496] transaction","detail":"{read_only:false; response_revision:103; number_of_response:1; }","duration":"251.897638ms","start":"2025-11-01T10:15:44.440138Z","end":"2025-11-01T10:15:44.692035Z","steps":["trace[1163522496] 'process raft request'  (duration: 119.726489ms)","trace[1163522496] 'compare'  (duration: 131.803742ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:15:44.788476Z","caller":"traceutil/trace.go:172","msg":"trace[1301626639] transaction","detail":"{read_only:false; response_revision:106; number_of_response:1; }","duration":"144.748934ms","start":"2025-11-01T10:15:44.643704Z","end":"2025-11-01T10:15:44.788453Z","steps":["trace[1301626639] 'process raft request'  (duration: 144.64552ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:15:45.025255Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.430804ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351100809778 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-297661.1873da82bfcc508d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-297661.1873da82bfcc508d\" value_size:598 lease:6414984314246033960 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-01T10:15:45.025400Z","caller":"traceutil/trace.go:172","msg":"trace[1101190000] transaction","detail":"{read_only:false; response_revision:108; number_of_response:1; }","duration":"234.299987ms","start":"2025-11-01T10:15:44.791088Z","end":"2025-11-01T10:15:45.025388Z","steps":["trace[1101190000] 'process raft request'  (duration: 234.255895ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:15:45.025465Z","caller":"traceutil/trace.go:172","msg":"trace[1161192082] transaction","detail":"{read_only:false; response_revision:107; number_of_response:1; }","duration":"330.046455ms","start":"2025-11-01T10:15:44.695390Z","end":"2025-11-01T10:15:45.025437Z","steps":["trace[1161192082] 'process raft request'  (duration: 206.367236ms)","trace[1161192082] 'compare'  (duration: 123.324064ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:15:45.025573Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:15:44.695375Z","time spent":"330.156011ms","remote":"127.0.0.1:54912","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-297661.1873da82bfcc508d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-297661.1873da82bfcc508d\" value_size:598 lease:6414984314246033960 >> failure:<>"}
	{"level":"info","ts":"2025-11-01T10:15:45.135121Z","caller":"traceutil/trace.go:172","msg":"trace[774693436] transaction","detail":"{read_only:false; response_revision:110; number_of_response:1; }","duration":"105.448573ms","start":"2025-11-01T10:15:45.029653Z","end":"2025-11-01T10:15:45.135101Z","steps":["trace[774693436] 'process raft request'  (duration: 105.40472ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:15:45.135168Z","caller":"traceutil/trace.go:172","msg":"trace[709169259] transaction","detail":"{read_only:false; response_revision:109; number_of_response:1; }","duration":"107.14028ms","start":"2025-11-01T10:15:45.027988Z","end":"2025-11-01T10:15:45.135128Z","steps":["trace[709169259] 'process raft request'  (duration: 102.355198ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:16:04.656697Z","caller":"traceutil/trace.go:172","msg":"trace[1659511800] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"212.729695ms","start":"2025-11-01T10:16:04.443945Z","end":"2025-11-01T10:16:04.656674Z","steps":["trace[1659511800] 'process raft request'  (duration: 212.569634ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:16:04.778298Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.338118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:16:04.778384Z","caller":"traceutil/trace.go:172","msg":"trace[726824992] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:420; }","duration":"120.434958ms","start":"2025-11-01T10:16:04.657929Z","end":"2025-11-01T10:16:04.778364Z","steps":["trace[726824992] 'agreement among raft nodes before linearized reading'  (duration: 60.704216ms)","trace[726824992] 'range keys from in-memory index tree'  (duration: 59.598425ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:16:04.778447Z","caller":"traceutil/trace.go:172","msg":"trace[452978060] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"331.325731ms","start":"2025-11-01T10:16:04.447099Z","end":"2025-11-01T10:16:04.778425Z","steps":["trace[452978060] 'process raft request'  (duration: 271.534053ms)","trace[452978060] 'compare'  (duration: 59.611955ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:16:04.778788Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:16:04.447079Z","time spent":"331.446048ms","remote":"127.0.0.1:55132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5421,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/pause-297661\" mod_revision:340 > success:<request_put:<key:\"/registry/minions/pause-297661\" value_size:5383 >> failure:<request_range:<key:\"/registry/minions/pause-297661\" > >"}
	
	
	==> kernel <==
	 10:16:17 up  2:58,  0 user,  load average: 5.53, 1.98, 2.08
	Linux pause-297661 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11a7d411789fa6a12c87e30dddaad6f06e2d9ee1da69d65d8156525d726e8342] <==
	I1101 10:15:53.788812       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:15:53.854915       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:15:53.855079       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:15:53.855096       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:15:53.855136       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:15:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:15:54.057558       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:15:54.057583       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:15:54.057597       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:15:54.155913       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:15:54.457773       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:15:54.457799       1 metrics.go:72] Registering metrics
	I1101 10:15:54.457911       1 controller.go:711] "Syncing nftables rules"
	I1101 10:16:04.057855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:16:04.057955       1 main.go:301] handling current node
	I1101 10:16:14.060267       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:16:14.060310       1 main.go:301] handling current node
	
	
	==> kube-apiserver [472cb4bf17c605290e55b8041352682602fbd3184fdcf7ae902cf8466aacac4c] <==
	I1101 10:15:39.099781       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:15:39.099819       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:15:39.100227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:15:39.107106       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:15:39.111164       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:15:39.123686       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:15:39.126021       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:15:39.137239       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:15:40.058251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:15:40.129534       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:15:40.129634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:15:45.580151       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:15:45.640039       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:15:45.708138       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:15:45.716914       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:15:45.718886       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:15:45.725483       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:15:46.088073       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:15:46.815391       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:15:46.826442       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:15:46.835419       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:15:51.982601       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:15:52.032285       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:15:52.037987       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:15:52.181676       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0bd1538ac2657af6c6a5e8f373e61727a3b6a24642d5fc1bb8689a6cd54bc641] <==
	I1101 10:15:51.077016       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:15:51.078173       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:15:51.078268       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:15:51.078290       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:15:51.078365       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:15:51.078366       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:15:51.078511       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:15:51.078533       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:15:51.078682       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:15:51.079030       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:15:51.079037       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:15:51.079140       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:15:51.079148       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:15:51.079216       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-297661"
	I1101 10:15:51.079282       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:15:51.080557       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:15:51.080587       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:15:51.080639       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:15:51.080653       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:15:51.080952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:15:51.080955       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:15:51.083138       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:15:51.091356       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:15:51.101755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:16:06.082157       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad61b10f8e140aeb0af6fd55e782e028e92c86d23d31f34a996fe6bee23d45e7] <==
	I1101 10:15:53.609484       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:15:53.677206       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:15:53.777882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:15:53.777951       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:15:53.778044       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:15:53.797661       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:15:53.797717       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:15:53.803250       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:15:53.803697       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:15:53.803716       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:15:53.805336       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:15:53.805362       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:15:53.805398       1 config.go:200] "Starting service config controller"
	I1101 10:15:53.805421       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:15:53.805417       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:15:53.805441       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:15:53.805482       1 config.go:309] "Starting node config controller"
	I1101 10:15:53.805487       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:15:53.805502       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:15:53.905556       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:15:53.905595       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:15:53.905570       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [24e09344febf421139bbbdae8d663120c3c223b397b6fa22e35806255e5a549b] <==
	E1101 10:15:40.505386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:15:40.554779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:15:40.626354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:15:40.677239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:15:41.924339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:15:41.962826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:15:42.068516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:15:42.146457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:15:42.204125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:15:42.220502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:15:42.232867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:15:42.271254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:15:42.387031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:15:42.426440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:15:42.446734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:15:42.446906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:15:42.488619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:15:42.921494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:15:43.065008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:15:43.176236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:15:43.184917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:15:43.287719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:15:43.656980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:15:45.238774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1101 10:15:47.788415       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:15:47 pause-297661 kubelet[1340]: I1101 10:15:47.724388    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-297661" podStartSLOduration=3.72436216 podStartE2EDuration="3.72436216s" podCreationTimestamp="2025-11-01 10:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:15:47.710667469 +0000 UTC m=+1.127476042" watchObservedRunningTime="2025-11-01 10:15:47.72436216 +0000 UTC m=+1.141170733"
	Nov 01 10:15:51 pause-297661 kubelet[1340]: I1101 10:15:51.111758    1340 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:15:51 pause-297661 kubelet[1340]: I1101 10:15:51.112628    1340 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.315973    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldrr\" (UniqueName: \"kubernetes.io/projected/4c409377-301d-463a-8a0e-beb0afb959c7-kube-api-access-vldrr\") pod \"kube-proxy-5mqgt\" (UID: \"4c409377-301d-463a-8a0e-beb0afb959c7\") " pod="kube-system/kube-proxy-5mqgt"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316020    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/263025a4-2ce5-48bc-805a-20a2a35bb5f2-lib-modules\") pod \"kindnet-vlk6r\" (UID: \"263025a4-2ce5-48bc-805a-20a2a35bb5f2\") " pod="kube-system/kindnet-vlk6r"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316040    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c409377-301d-463a-8a0e-beb0afb959c7-xtables-lock\") pod \"kube-proxy-5mqgt\" (UID: \"4c409377-301d-463a-8a0e-beb0afb959c7\") " pod="kube-system/kube-proxy-5mqgt"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316057    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c409377-301d-463a-8a0e-beb0afb959c7-kube-proxy\") pod \"kube-proxy-5mqgt\" (UID: \"4c409377-301d-463a-8a0e-beb0afb959c7\") " pod="kube-system/kube-proxy-5mqgt"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316159    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/263025a4-2ce5-48bc-805a-20a2a35bb5f2-cni-cfg\") pod \"kindnet-vlk6r\" (UID: \"263025a4-2ce5-48bc-805a-20a2a35bb5f2\") " pod="kube-system/kindnet-vlk6r"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316188    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/263025a4-2ce5-48bc-805a-20a2a35bb5f2-xtables-lock\") pod \"kindnet-vlk6r\" (UID: \"263025a4-2ce5-48bc-805a-20a2a35bb5f2\") " pod="kube-system/kindnet-vlk6r"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316218    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c409377-301d-463a-8a0e-beb0afb959c7-lib-modules\") pod \"kube-proxy-5mqgt\" (UID: \"4c409377-301d-463a-8a0e-beb0afb959c7\") " pod="kube-system/kube-proxy-5mqgt"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316264    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk6sz\" (UniqueName: \"kubernetes.io/projected/263025a4-2ce5-48bc-805a-20a2a35bb5f2-kube-api-access-mk6sz\") pod \"kindnet-vlk6r\" (UID: \"263025a4-2ce5-48bc-805a-20a2a35bb5f2\") " pod="kube-system/kindnet-vlk6r"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.739128    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5mqgt" podStartSLOduration=1.739104516 podStartE2EDuration="1.739104516s" podCreationTimestamp="2025-11-01 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:15:53.728453918 +0000 UTC m=+7.145262491" watchObservedRunningTime="2025-11-01 10:15:53.739104516 +0000 UTC m=+7.155913090"
	Nov 01 10:15:56 pause-297661 kubelet[1340]: I1101 10:15:56.830425    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vlk6r" podStartSLOduration=4.830399347 podStartE2EDuration="4.830399347s" podCreationTimestamp="2025-11-01 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:15:53.739090826 +0000 UTC m=+7.155899422" watchObservedRunningTime="2025-11-01 10:15:56.830399347 +0000 UTC m=+10.247207930"
	Nov 01 10:16:04 pause-297661 kubelet[1340]: I1101 10:16:04.441928    1340 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:16:04 pause-297661 kubelet[1340]: I1101 10:16:04.899856    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pctk\" (UniqueName: \"kubernetes.io/projected/1680b086-3fa8-4b80-9705-650dcd1f0da2-kube-api-access-4pctk\") pod \"coredns-66bc5c9577-sdhft\" (UID: \"1680b086-3fa8-4b80-9705-650dcd1f0da2\") " pod="kube-system/coredns-66bc5c9577-sdhft"
	Nov 01 10:16:04 pause-297661 kubelet[1340]: I1101 10:16:04.899923    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1680b086-3fa8-4b80-9705-650dcd1f0da2-config-volume\") pod \"coredns-66bc5c9577-sdhft\" (UID: \"1680b086-3fa8-4b80-9705-650dcd1f0da2\") " pod="kube-system/coredns-66bc5c9577-sdhft"
	Nov 01 10:16:05 pause-297661 kubelet[1340]: I1101 10:16:05.784447    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sdhft" podStartSLOduration=13.784416618 podStartE2EDuration="13.784416618s" podCreationTimestamp="2025-11-01 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:16:05.769160084 +0000 UTC m=+19.185968675" watchObservedRunningTime="2025-11-01 10:16:05.784416618 +0000 UTC m=+19.201225192"
	Nov 01 10:16:10 pause-297661 kubelet[1340]: W1101 10:16:10.764142    1340 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 01 10:16:10 pause-297661 kubelet[1340]: E1101 10:16:10.764256    1340 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 01 10:16:10 pause-297661 kubelet[1340]: E1101 10:16:10.764320    1340 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 10:16:10 pause-297661 kubelet[1340]: E1101 10:16:10.764332    1340 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 10:16:14 pause-297661 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:16:14 pause-297661 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:16:14 pause-297661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:16:14 pause-297661 systemd[1]: kubelet.service: Consumed 1.348s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-297661 -n pause-297661
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-297661 -n pause-297661: exit status 2 (411.743277ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-297661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-297661
helpers_test.go:243: (dbg) docker inspect pause-297661:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90",
	        "Created": "2025-11-01T10:15:13.339733244Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 689939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:15:13.417600707Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90/hostname",
	        "HostsPath": "/var/lib/docker/containers/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90/hosts",
	        "LogPath": "/var/lib/docker/containers/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90/f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90-json.log",
	        "Name": "/pause-297661",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-297661:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-297661",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f9246503bec068542ebbf0c0fd0637a1feac664fea5105da98a3ad0ffa7a9b90",
	                "LowerDir": "/var/lib/docker/overlay2/313b7c587eb9ab28ab9a9c5d9821c3876d2c9e40813fd4886b498b4cecc1f623-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/313b7c587eb9ab28ab9a9c5d9821c3876d2c9e40813fd4886b498b4cecc1f623/merged",
	                "UpperDir": "/var/lib/docker/overlay2/313b7c587eb9ab28ab9a9c5d9821c3876d2c9e40813fd4886b498b4cecc1f623/diff",
	                "WorkDir": "/var/lib/docker/overlay2/313b7c587eb9ab28ab9a9c5d9821c3876d2c9e40813fd4886b498b4cecc1f623/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-297661",
	                "Source": "/var/lib/docker/volumes/pause-297661/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-297661",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-297661",
	                "name.minikube.sigs.k8s.io": "pause-297661",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6429c075855c32480c1084a5d9e66d68c1e469a3cf9074b8dcfd4934cf5211bc",
	            "SandboxKey": "/var/run/docker/netns/6429c075855c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-297661": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:c1:a8:df:7c:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5efbbe29eca3cfcfada3bb9d99b9f97315c4248dc80ea0279fc1c930d5dd1b99",
	                    "EndpointID": "51b7b788e0002ced01b2b1e9614f8fd65f8a66159065a411c9943645fd6a8a2d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-297661",
	                        "f9246503bec0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
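From Docker's point of view the container itself is healthy: "Status" is "running", "Paused" is false, and the API-server port 8443/tcp is published on 127.0.0.1:33096. The relevant fields can be read directly with Go templates, in the same style the harness uses for the SSH port, for example:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-297661
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-297661
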
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-297661 -n pause-297661
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-297661 -n pause-297661: exit status 2 (360.640313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
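The harness treats exit status 2 from minikube status as potentially benign ("may be ok"): the command printed a state but did not exit 0. Rather than querying one field per invocation, the host, kubelet and API-server states can be read in a single call; the Host and APIServer field names appear above, and Kubelet is assumed here:

	out/minikube-linux-amd64 status -p pause-297661 --format 'host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
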
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-297661 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-297661 logs -n 25: (1.086746423s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-473081 --memory=3072 --driver=docker  --container-runtime=crio                                  │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │ 01 Nov 25 10:13 UTC │
	│ stop    │ -p scheduled-stop-473081 --schedule 5m                                                                            │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 5m                                                                            │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 5m                                                                            │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --cancel-scheduled                                                                       │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │ 01 Nov 25 10:13 UTC │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ stop    │ -p scheduled-stop-473081 --schedule 15s                                                                           │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ delete  │ -p scheduled-stop-473081                                                                                          │ scheduled-stop-473081       │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ start   │ -p insufficient-storage-500399 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-500399 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ delete  │ -p insufficient-storage-500399                                                                                    │ insufficient-storage-500399 │ jenkins │ v1.37.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:15 UTC │
	│ start   │ -p offline-crio-286433 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-286433         │ jenkins │ v1.37.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:15 UTC │
	│ start   │ -p pause-297661 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-297661                │ jenkins │ v1.37.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:16 UTC │
	│ start   │ -p stopped-upgrade-333944 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-333944      │ jenkins │ v1.32.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:16 UTC │
	│ start   │ -p missing-upgrade-489499 --memory=3072 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-489499      │ jenkins │ v1.32.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:16 UTC │
	│ delete  │ -p offline-crio-286433                                                                                            │ offline-crio-286433         │ jenkins │ v1.37.0 │ 01 Nov 25 10:15 UTC │ 01 Nov 25 10:15 UTC │
	│ start   │ -p running-upgrade-821146 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-821146      │ jenkins │ v1.32.0 │ 01 Nov 25 10:15 UTC │                     │
	│ stop    │ stopped-upgrade-333944 stop                                                                                       │ stopped-upgrade-333944      │ jenkins │ v1.32.0 │ 01 Nov 25 10:16 UTC │ 01 Nov 25 10:16 UTC │
	│ start   │ -p missing-upgrade-489499 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ missing-upgrade-489499      │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │                     │
	│ start   │ -p stopped-upgrade-333944 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-333944      │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │                     │
	│ start   │ -p pause-297661 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-297661                │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │ 01 Nov 25 10:16 UTC │
	│ pause   │ -p pause-297661 --alsologtostderr -v=5                                                                            │ pause-297661                │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:16:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:16:07.596320  702757 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:16:07.596641  702757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:07.596652  702757 out.go:374] Setting ErrFile to fd 2...
	I1101 10:16:07.596659  702757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:07.596897  702757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:16:07.597379  702757 out.go:368] Setting JSON to false
	I1101 10:16:07.598543  702757 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10705,"bootTime":1761981463,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:16:07.598657  702757 start.go:143] virtualization: kvm guest
	I1101 10:16:07.600350  702757 out.go:179] * [pause-297661] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:16:07.601611  702757 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:16:07.601655  702757 notify.go:221] Checking for updates...
	I1101 10:16:07.603465  702757 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:16:07.604468  702757 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:16:07.605401  702757 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:16:07.606437  702757 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:16:07.607465  702757 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:16:07.608953  702757 config.go:182] Loaded profile config "pause-297661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:07.609487  702757 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:16:07.637761  702757 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:16:07.637960  702757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:16:07.705603  702757 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 10:16:07.694412155 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:16:07.705715  702757 docker.go:319] overlay module found
	I1101 10:16:07.707281  702757 out.go:179] * Using the docker driver based on existing profile
	I1101 10:16:07.708309  702757 start.go:309] selected driver: docker
	I1101 10:16:07.708329  702757 start.go:930] validating driver "docker" against &{Name:pause-297661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:16:07.708468  702757 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:16:07.708552  702757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:16:07.779796  702757 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 10:16:07.768562943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:16:07.780588  702757 cni.go:84] Creating CNI manager for ""
	I1101 10:16:07.780660  702757 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:16:07.780706  702757 start.go:353] cluster config:
	{Name:pause-297661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:16:07.782237  702757 out.go:179] * Starting "pause-297661" primary control-plane node in "pause-297661" cluster
	I1101 10:16:07.783152  702757 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:16:07.784152  702757 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:16:07.785060  702757 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:16:07.785123  702757 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:16:07.785138  702757 cache.go:59] Caching tarball of preloaded images
	I1101 10:16:07.785158  702757 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:16:07.785243  702757 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:16:07.785260  702757 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:16:07.785447  702757 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/config.json ...
	I1101 10:16:07.808195  702757 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:16:07.808218  702757 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:16:07.808241  702757 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:16:07.808282  702757 start.go:360] acquireMachinesLock for pause-297661: {Name:mk059299f77c9dd6878046d3e145d080b4a2defd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:16:07.808366  702757 start.go:364] duration metric: took 47.267µs to acquireMachinesLock for "pause-297661"
	I1101 10:16:07.808390  702757 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:16:07.808401  702757 fix.go:54] fixHost starting: 
	I1101 10:16:07.808670  702757 cli_runner.go:164] Run: docker container inspect pause-297661 --format={{.State.Status}}
	I1101 10:16:07.827583  702757 fix.go:112] recreateIfNeeded on pause-297661: state=Running err=<nil>
	W1101 10:16:07.827624  702757 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:16:04.804748  699371 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v running-upgrade-821146:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.526518528s)
	I1101 10:16:04.804789  699371 kic.go:203] duration metric: took 5.526788 seconds to extract preloaded images to volume
	W1101 10:16:04.804922  699371 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:16:04.804963  699371 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:16:04.805012  699371 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:16:04.868007  699371 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-821146 --name running-upgrade-821146 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-821146 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-821146 --network running-upgrade-821146 --ip 192.168.85.2 --volume running-upgrade-821146:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1101 10:16:05.178886  699371 cli_runner.go:164] Run: docker container inspect running-upgrade-821146 --format={{.State.Running}}
	I1101 10:16:05.211284  699371 cli_runner.go:164] Run: docker container inspect running-upgrade-821146 --format={{.State.Status}}
	I1101 10:16:05.244553  699371 cli_runner.go:164] Run: docker exec running-upgrade-821146 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:16:05.318203  699371 oci.go:144] the created container "running-upgrade-821146" has a running status.
	I1101 10:16:05.318231  699371 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa...
	I1101 10:16:05.628004  699371 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:16:05.657472  699371 cli_runner.go:164] Run: docker container inspect running-upgrade-821146 --format={{.State.Status}}
	I1101 10:16:05.678809  699371 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:16:05.678823  699371 kic_runner.go:114] Args: [docker exec --privileged running-upgrade-821146 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:16:05.733288  699371 cli_runner.go:164] Run: docker container inspect running-upgrade-821146 --format={{.State.Status}}
	I1101 10:16:05.755728  699371 machine.go:88] provisioning docker machine ...
	I1101 10:16:05.755771  699371 ubuntu.go:169] provisioning hostname "running-upgrade-821146"
	I1101 10:16:05.755876  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:05.781379  699371 main.go:141] libmachine: Using SSH client type: native
	I1101 10:16:05.781936  699371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1101 10:16:05.781950  699371 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-821146 && echo "running-upgrade-821146" | sudo tee /etc/hostname
	I1101 10:16:05.925094  699371 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-821146
	
	I1101 10:16:05.925181  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:05.949781  699371 main.go:141] libmachine: Using SSH client type: native
	I1101 10:16:05.950288  699371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1101 10:16:05.950306  699371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-821146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-821146/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-821146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:16:06.079820  699371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:16:06.079874  699371 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:16:06.079918  699371 ubuntu.go:177] setting up certificates
	I1101 10:16:06.079930  699371 provision.go:83] configureAuth start
	I1101 10:16:06.079983  699371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-821146
	I1101 10:16:06.101602  699371 provision.go:138] copyHostCerts
	I1101 10:16:06.101656  699371 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:16:06.101663  699371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:16:06.101746  699371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:16:06.101899  699371 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:16:06.101906  699371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:16:06.101937  699371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:16:06.102030  699371 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:16:06.102035  699371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:16:06.102068  699371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:16:06.102132  699371 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-821146 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-821146]
	I1101 10:16:06.498831  699371 provision.go:172] copyRemoteCerts
	I1101 10:16:06.498925  699371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:16:06.498971  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:06.516584  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:06.605952  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:16:06.636427  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:16:06.667264  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 10:16:06.695934  699371 provision.go:86] duration metric: configureAuth took 615.989209ms
	I1101 10:16:06.695960  699371 ubuntu.go:193] setting minikube options for container-runtime
	I1101 10:16:06.696218  699371 config.go:182] Loaded profile config "running-upgrade-821146": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 10:16:06.696375  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:06.715868  699371 main.go:141] libmachine: Using SSH client type: native
	I1101 10:16:06.716373  699371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1101 10:16:06.716395  699371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:16:06.950746  699371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:16:06.950767  699371 machine.go:91] provisioned docker machine in 1.195021927s
	I1101 10:16:06.950778  699371 client.go:171] LocalClient.Create took 8.560598968s
	I1101 10:16:06.950796  699371 start.go:167] duration metric: libmachine.API.Create for "running-upgrade-821146" took 8.560661721s
	I1101 10:16:06.950805  699371 start.go:300] post-start starting for "running-upgrade-821146" (driver="docker")
	I1101 10:16:06.950818  699371 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:16:06.950901  699371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:16:06.950947  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:06.969330  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:07.061298  699371 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:16:07.065090  699371 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:16:07.065133  699371 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 10:16:07.065142  699371 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 10:16:07.065149  699371 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 10:16:07.065159  699371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:16:07.065210  699371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:16:07.065278  699371 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:16:07.065359  699371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:16:07.075732  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:07.105180  699371 start.go:303] post-start completed in 154.359527ms
	I1101 10:16:07.105587  699371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-821146
	I1101 10:16:07.123891  699371 profile.go:148] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/config.json ...
	I1101 10:16:07.124235  699371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:07.124288  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:07.142690  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:07.226281  699371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:16:07.231432  699371 start.go:128] duration metric: createHost completed in 8.843291577s
	I1101 10:16:07.231453  699371 start.go:83] releasing machines lock for "running-upgrade-821146", held for 8.843488078s
	I1101 10:16:07.231545  699371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-821146
	I1101 10:16:07.249744  699371 ssh_runner.go:195] Run: cat /version.json
	I1101 10:16:07.249789  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:07.249808  699371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:16:07.249892  699371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-821146
	I1101 10:16:07.268819  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:07.270063  699371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/running-upgrade-821146/id_rsa Username:docker}
	I1101 10:16:07.440865  699371 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:07.445726  699371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:16:07.589134  699371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 10:16:07.594609  699371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:07.619435  699371 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 10:16:07.619521  699371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:07.654993  699371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1101 10:16:07.655019  699371 start.go:472] detecting cgroup driver to use...
	I1101 10:16:07.655059  699371 detect.go:199] detected "systemd" cgroup driver on host os
	I1101 10:16:07.655143  699371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:16:07.677632  699371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:16:07.691922  699371 docker.go:203] disabling cri-docker service (if available) ...
	I1101 10:16:07.691972  699371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:16:07.709598  699371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:16:07.726745  699371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:16:07.805634  699371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:16:07.885654  699371 docker.go:219] disabling docker service ...
	I1101 10:16:07.885705  699371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:16:07.905591  699371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:16:07.918794  699371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:16:07.989293  699371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:16:08.132903  699371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:16:08.145461  699371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:16:08.164287  699371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:16:08.164339  699371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:08.177787  699371 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:16:08.177872  699371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:08.190191  699371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:08.201934  699371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:08.213108  699371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:16:08.223785  699371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:16:08.233765  699371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:16:08.244368  699371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:08.365413  699371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:16:08.470286  699371 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:16:08.470355  699371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:16:08.474603  699371 start.go:540] Will wait 60s for crictl version
	I1101 10:16:08.474654  699371 ssh_runner.go:195] Run: which crictl
	I1101 10:16:08.478550  699371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:16:08.515102  699371 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 10:16:08.515175  699371 ssh_runner.go:195] Run: crio --version
	I1101 10:16:08.554366  699371 ssh_runner.go:195] Run: crio --version
	I1101 10:16:08.595283  699371 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1101 10:16:06.121569  701603 delete.go:124] DEMOLISHING missing-upgrade-489499 ...
	I1101 10:16:06.121702  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:06.142301  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	W1101 10:16:06.142369  701603 stop.go:83] unable to get state: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:06.142395  701603 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:06.142944  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:06.162545  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:06.162657  701603 delete.go:82] Unable to get host status for missing-upgrade-489499, assuming it has already been deleted: state: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:06.162716  701603 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-489499
	W1101 10:16:06.183056  701603 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-489499 returned with exit code 1
	I1101 10:16:06.183123  701603 kic.go:371] could not find the container missing-upgrade-489499 to remove it. will try anyways
	I1101 10:16:06.183189  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:06.204234  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	W1101 10:16:06.204314  701603 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:06.204376  701603 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-489499 /bin/bash -c "sudo init 0"
	W1101 10:16:06.225005  701603 cli_runner.go:211] docker exec --privileged -t missing-upgrade-489499 /bin/bash -c "sudo init 0" returned with exit code 1
	I1101 10:16:06.225043  701603 oci.go:659] error shutdown missing-upgrade-489499: docker exec --privileged -t missing-upgrade-489499 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.226188  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:07.245554  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:07.245619  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.245630  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:07.245668  701603 retry.go:31] will retry after 476.905631ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.723058  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:07.746952  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:07.747034  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:07.747046  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:07.747085  701603 retry.go:31] will retry after 581.344514ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:08.329508  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:08.349421  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:08.349499  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:08.349530  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:08.349566  701603 retry.go:31] will retry after 1.157346557s: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:09.508073  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:09.526274  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:09.526348  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:09.526365  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:09.526408  701603 retry.go:31] will retry after 1.54856021s: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
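Annotation: the block above is minikube's deletion path polling a container that no longer exists; every `docker container inspect` exits non-zero, so the retry helper backs off and tries again until it gives up. A rough standalone equivalent of that poll-until-exited loop (container name and delays are illustrative, not minikube's actual values):

    # Poll a container's state with increasing delays until it reports "exited" or we give up,
    # mirroring the inspect/retry pattern in the log above.
    name=missing-upgrade-489499   # illustrative
    for delay in 0.5 1 2 4; do
      state=$(docker container inspect "$name" --format '{{.State.Status}}' 2>/dev/null) \
        && [ "$state" = "exited" ] && { echo "container is exited"; break; }
      echo "state is '${state:-unknown}', retrying in ${delay}s"
      sleep "$delay"
    done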
	I1101 10:16:07.829026  702757 out.go:252] * Updating the running docker "pause-297661" container ...
	I1101 10:16:07.829062  702757 machine.go:94] provisionDockerMachine start ...
	I1101 10:16:07.829140  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:07.852768  702757 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:07.853094  702757 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1101 10:16:07.853110  702757 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:16:08.000432  702757 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-297661
	
	I1101 10:16:08.000472  702757 ubuntu.go:182] provisioning hostname "pause-297661"
	I1101 10:16:08.000529  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:08.021993  702757 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:08.022335  702757 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1101 10:16:08.022364  702757 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-297661 && echo "pause-297661" | sudo tee /etc/hostname
	I1101 10:16:08.177125  702757 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-297661
	
	I1101 10:16:08.177208  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:08.196195  702757 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:08.196432  702757 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1101 10:16:08.196449  702757 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-297661' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-297661/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-297661' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:16:08.342357  702757 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:16:08.342395  702757 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:16:08.342424  702757 ubuntu.go:190] setting up certificates
	I1101 10:16:08.342452  702757 provision.go:84] configureAuth start
	I1101 10:16:08.342521  702757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-297661
	I1101 10:16:08.361973  702757 provision.go:143] copyHostCerts
	I1101 10:16:08.362036  702757 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:16:08.362057  702757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:16:08.362136  702757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:16:08.362280  702757 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:16:08.362294  702757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:16:08.362336  702757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:16:08.362416  702757 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:16:08.362426  702757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:16:08.362473  702757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:16:08.362549  702757 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.pause-297661 san=[127.0.0.1 192.168.76.2 localhost minikube pause-297661]
	I1101 10:16:08.795899  702757 provision.go:177] copyRemoteCerts
	I1101 10:16:08.795996  702757 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:16:08.796044  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:08.816009  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:08.920173  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:16:08.940082  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:16:08.960222  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:16:08.980904  702757 provision.go:87] duration metric: took 638.43581ms to configureAuth
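Annotation: configureAuth above regenerates the machine server certificate with the SANs listed in the log (127.0.0.1, 192.168.76.2, localhost, minikube, pause-297661) and copies it to /etc/docker. minikube does this in Go; a quick, hedged way to double-check the SANs on the copied cert from inside the node:

    # Show the Subject Alternative Names baked into the provisioned server cert.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'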
	I1101 10:16:08.980941  702757 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:16:08.981216  702757 config.go:182] Loaded profile config "pause-297661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:08.981338  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.000166  702757 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:09.000386  702757 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1101 10:16:09.000401  702757 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:16:09.304122  702757 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:16:09.304149  702757 machine.go:97] duration metric: took 1.475078336s to provisionDockerMachine
	I1101 10:16:09.304161  702757 start.go:293] postStartSetup for "pause-297661" (driver="docker")
	I1101 10:16:09.304170  702757 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:16:09.304228  702757 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:16:09.304311  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.324145  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:09.428554  702757 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:16:09.432869  702757 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:16:09.432900  702757 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:16:09.432914  702757 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:16:09.432967  702757 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:16:09.433038  702757 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:16:09.433124  702757 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:16:09.441819  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:09.461853  702757 start.go:296] duration metric: took 157.654927ms for postStartSetup
	I1101 10:16:09.461970  702757 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:09.462038  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.480979  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:09.581620  702757 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:16:09.587448  702757 fix.go:56] duration metric: took 1.779037944s for fixHost
	I1101 10:16:09.587490  702757 start.go:83] releasing machines lock for "pause-297661", held for 1.779110221s
	I1101 10:16:09.587562  702757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-297661
	I1101 10:16:09.606754  702757 ssh_runner.go:195] Run: cat /version.json
	I1101 10:16:09.606799  702757 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:16:09.606820  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.606901  702757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-297661
	I1101 10:16:09.626512  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:09.626831  702757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/pause-297661/id_rsa Username:docker}
	I1101 10:16:09.788502  702757 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:09.797226  702757 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:16:09.845533  702757 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:16:09.850881  702757 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:16:09.850976  702757 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:09.861784  702757 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:16:09.861814  702757 start.go:496] detecting cgroup driver to use...
	I1101 10:16:09.861866  702757 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:16:09.861921  702757 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:16:09.882374  702757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:16:09.899051  702757 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:16:09.899121  702757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:16:09.917594  702757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:16:09.932649  702757 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:16:10.075203  702757 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:16:10.203330  702757 docker.go:234] disabling docker service ...
	I1101 10:16:10.203406  702757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:16:10.221454  702757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:16:10.237590  702757 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:16:10.349195  702757 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:16:10.472669  702757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:16:10.487166  702757 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:16:10.502510  702757 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:16:10.502577  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.513388  702757 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:16:10.513464  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.523180  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.533259  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.543161  702757 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:16:10.552762  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.563122  702757 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.572468  702757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:10.582341  702757 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:16:10.590528  702757 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:16:10.598594  702757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:10.746345  702757 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:16:10.904374  702757 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:16:10.904458  702757 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:16:10.908949  702757 start.go:564] Will wait 60s for crictl version
	I1101 10:16:10.909008  702757 ssh_runner.go:195] Run: which crictl
	I1101 10:16:10.912982  702757 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:16:10.944347  702757 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:16:10.944439  702757 ssh_runner.go:195] Run: crio --version
	I1101 10:16:10.977060  702757 ssh_runner.go:195] Run: crio --version
	I1101 10:16:11.012702  702757 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
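Annotation: unlike the CRI-O 1.24.6 flow earlier, this pass also injects net.ipv4.ip_unprivileged_port_start=0 into crio's default_sysctls (the grep/sed pair above), so pods can bind ports below 1024 without extra capabilities. A sketch of the fragment that sequence is meant to produce, assuming the same drop-in path:

    # Expected shape of the sysctl list after the grep/sed pair above (illustrative):
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -A2 '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf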
	I1101 10:16:06.352849  701851 out.go:252] * Restarting existing docker container for "stopped-upgrade-333944" ...
	I1101 10:16:06.352923  701851 cli_runner.go:164] Run: docker start stopped-upgrade-333944
	I1101 10:16:06.607992  701851 cli_runner.go:164] Run: docker container inspect stopped-upgrade-333944 --format={{.State.Status}}
	I1101 10:16:06.628021  701851 kic.go:430] container "stopped-upgrade-333944" state is running.
	I1101 10:16:06.628463  701851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-333944
	I1101 10:16:06.648515  701851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/config.json ...
	I1101 10:16:06.648787  701851 machine.go:94] provisionDockerMachine start ...
	I1101 10:16:06.648898  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:06.669489  701851 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:06.669873  701851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:16:06.669895  701851 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:16:06.670578  701851 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45894->127.0.0.1:33118: read: connection reset by peer
	I1101 10:16:09.791872  701851 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-333944
	
	I1101 10:16:09.791904  701851 ubuntu.go:182] provisioning hostname "stopped-upgrade-333944"
	I1101 10:16:09.791976  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:09.813075  701851 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:09.813410  701851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:16:09.813432  701851 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-333944 && echo "stopped-upgrade-333944" | sudo tee /etc/hostname
	I1101 10:16:09.953231  701851 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-333944
	
	I1101 10:16:09.953315  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:09.984184  701851 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:09.984545  701851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:16:09.984577  701851 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-333944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-333944/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-333944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:16:10.106308  701851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:16:10.106345  701851 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:16:10.106385  701851 ubuntu.go:190] setting up certificates
	I1101 10:16:10.106398  701851 provision.go:84] configureAuth start
	I1101 10:16:10.106483  701851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-333944
	I1101 10:16:10.132251  701851 provision.go:143] copyHostCerts
	I1101 10:16:10.132339  701851 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:16:10.132363  701851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:16:10.132444  701851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:16:10.132635  701851 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:16:10.132652  701851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:16:10.132699  701851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:16:10.132800  701851 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:16:10.132813  701851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:16:10.132870  701851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:16:10.132966  701851 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-333944 san=[127.0.0.1 192.168.94.2 localhost minikube stopped-upgrade-333944]
	I1101 10:16:10.302095  701851 provision.go:177] copyRemoteCerts
	I1101 10:16:10.302158  701851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:16:10.302195  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:10.321017  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:10.411145  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:16:10.436867  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:16:10.463918  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:16:10.490495  701851 provision.go:87] duration metric: took 384.060553ms to configureAuth
	I1101 10:16:10.490524  701851 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:16:10.490724  701851 config.go:182] Loaded profile config "stopped-upgrade-333944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 10:16:10.490933  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:10.510890  701851 main.go:143] libmachine: Using SSH client type: native
	I1101 10:16:10.511213  701851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:16:10.511256  701851 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:16:10.784043  701851 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:16:10.784073  701851 machine.go:97] duration metric: took 4.135267915s to provisionDockerMachine
	I1101 10:16:10.784089  701851 start.go:293] postStartSetup for "stopped-upgrade-333944" (driver="docker")
	I1101 10:16:10.784104  701851 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:16:10.784180  701851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:16:10.784246  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:10.805012  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:10.896783  701851 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:16:10.901066  701851 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:16:10.901105  701851 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 10:16:10.901117  701851 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 10:16:10.901126  701851 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 10:16:10.901140  701851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:16:10.901231  701851 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:16:10.901342  701851 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:16:10.901472  701851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:16:10.913144  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:10.943724  701851 start.go:296] duration metric: took 159.615875ms for postStartSetup
	I1101 10:16:10.943830  701851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:10.943912  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:10.965855  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:11.052100  701851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:16:11.057612  701851 fix.go:56] duration metric: took 4.724957844s for fixHost
	I1101 10:16:11.057646  701851 start.go:83] releasing machines lock for "stopped-upgrade-333944", held for 4.725017272s
	I1101 10:16:11.057750  701851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-333944
	I1101 10:16:11.078080  701851 ssh_runner.go:195] Run: cat /version.json
	I1101 10:16:11.078137  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:11.078203  701851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:16:11.078303  701851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-333944
	I1101 10:16:11.099544  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:11.100064  701851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/stopped-upgrade-333944/id_rsa Username:docker}
	I1101 10:16:11.013749  702757 cli_runner.go:164] Run: docker network inspect pause-297661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:16:11.031500  702757 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:16:11.036304  702757 kubeadm.go:884] updating cluster {Name:pause-297661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:16:11.036482  702757 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:16:11.036527  702757 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:11.072565  702757 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:16:11.072594  702757 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:16:11.072666  702757 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:11.107582  702757 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:16:11.107609  702757 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:16:11.107616  702757 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:16:11.107738  702757 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-297661 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:16:11.107823  702757 ssh_runner.go:195] Run: crio config
	I1101 10:16:11.173977  702757 cni.go:84] Creating CNI manager for ""
	I1101 10:16:11.174005  702757 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:16:11.174028  702757 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:16:11.174074  702757 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-297661 NodeName:pause-297661 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:16:11.174253  702757 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-297661"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:16:11.174339  702757 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:16:11.183787  702757 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:16:11.183872  702757 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:16:11.193049  702757 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1101 10:16:11.207551  702757 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:16:11.222031  702757 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
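Annotation: the rendered kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new before any kubeadm phase runs against it. If such a file needs a manual sanity check, recent kubeadm releases can validate it directly; a sketch, assuming the v1.34.1 binary in use supports the validate subcommand:

    # Validate the generated config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new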
	I1101 10:16:11.236266  702757 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:16:11.240796  702757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:11.386072  702757 ssh_runner.go:195] Run: sudo systemctl start kubelet
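Annotation: after the 10-kubeadm.conf drop-in and kubelet.service unit are copied, kubelet is started through systemd. A quick way to confirm the drop-in was picked up and the service is running, using the paths from the log above:

    # Show the merged unit (including /etc/systemd/system/kubelet.service.d/10-kubeadm.conf)
    # and the current service state.
    sudo systemctl cat kubelet
    sudo systemctl is-active kubelet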
	I1101 10:16:11.401697  702757 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661 for IP: 192.168.76.2
	I1101 10:16:11.401719  702757 certs.go:195] generating shared ca certs ...
	I1101 10:16:11.401740  702757 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:11.401916  702757 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:16:11.401960  702757 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:16:11.401971  702757 certs.go:257] generating profile certs ...
	I1101 10:16:11.402077  702757 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.key
	I1101 10:16:11.402144  702757 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/apiserver.key.57c967b1
	I1101 10:16:11.402187  702757 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/proxy-client.key
	I1101 10:16:11.402305  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:16:11.402352  702757 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:16:11.402363  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:16:11.402388  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:16:11.402412  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:16:11.402438  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:16:11.402480  702757 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:11.403217  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:16:11.424752  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:16:11.446715  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:16:11.467973  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:16:11.488705  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:16:11.511678  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:16:11.532379  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:16:11.552803  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:16:11.575984  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:16:11.595623  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:16:11.616628  702757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:16:11.636820  702757 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:16:11.654265  702757 ssh_runner.go:195] Run: openssl version
	I1101 10:16:11.662128  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:16:11.672327  702757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:11.676789  702757 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:11.676872  702757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:11.713780  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:16:11.723238  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:16:11.733307  702757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:16:11.741612  702757 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:16:11.741691  702757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:16:11.779720  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:16:11.790185  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:16:11.800988  702757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:16:11.805744  702757 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:16:11.805827  702757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:16:11.850866  702757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
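Annotation: the ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: each CA certificate is hashed and linked as <hash>.0 under /etc/ssl/certs so OpenSSL can locate it at verification time. A minimal reproduction of that naming step for one of the certs above:

    # Derive the subject-hash filename OpenSSL expects and create the lookup symlink.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
    ls -l "/etc/ssl/certs/${hash}.0"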
	I1101 10:16:11.860320  702757 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:16:11.864969  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:16:11.901057  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:16:11.940645  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:16:11.977684  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:16:12.017993  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:16:12.070203  702757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
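Annotation: the series of `-checkend 86400` runs above asks OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would steer minikube toward regenerating certs instead of reusing them. The same check, looped over the files probed above:

    # Exit status per cert: 0 = still valid in 24h, non-zero = expiring or expired.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        && echo "$c: ok" || echo "$c: expiring within 24h"
    done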
	I1101 10:16:12.107495  702757 kubeadm.go:401] StartCluster: {Name:pause-297661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-297661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:16:12.107666  702757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:16:12.107751  702757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:16:12.143186  702757 cri.go:89] found id: "540e6f288254c2f91c0b576e675ab75f176f33dc04857cd29478b2be023c0967"
	I1101 10:16:12.143211  702757 cri.go:89] found id: "ad61b10f8e140aeb0af6fd55e782e028e92c86d23d31f34a996fe6bee23d45e7"
	I1101 10:16:12.143217  702757 cri.go:89] found id: "11a7d411789fa6a12c87e30dddaad6f06e2d9ee1da69d65d8156525d726e8342"
	I1101 10:16:12.143221  702757 cri.go:89] found id: "0bd1538ac2657af6c6a5e8f373e61727a3b6a24642d5fc1bb8689a6cd54bc641"
	I1101 10:16:12.143225  702757 cri.go:89] found id: "24e09344febf421139bbbdae8d663120c3c223b397b6fa22e35806255e5a549b"
	I1101 10:16:12.143229  702757 cri.go:89] found id: "472cb4bf17c605290e55b8041352682602fbd3184fdcf7ae902cf8466aacac4c"
	I1101 10:16:12.143233  702757 cri.go:89] found id: "4cf89bdef43bcb6a8880f0173eb19d34c955c26650e304b2d61776b18a9f36c3"
	I1101 10:16:12.143237  702757 cri.go:89] found id: ""
	I1101 10:16:12.143290  702757 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:16:12.157926  702757 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:16:12Z" level=error msg="open /run/runc: no such file or directory"
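The runc failure above is benign in this context: with no /run/runc state directory there is nothing paused to resume, so the start simply falls through to a normal restart. For illustration only, a minimal Go sketch of the same "list paused containers" check (the listPaused helper is hypothetical, not minikube's own code; runc list -f json prints a JSON array of container states):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors the fields we need from "runc list -f json".
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused runc containers. A missing
// /run/runc directory (as in the log above) surfaces as a non-zero
// exit, which a caller can treat as "nothing is paused".
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range cs {
		if c.Status == "paused" {
			ids = append(ids, c.ID)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listPaused()
	fmt.Println(ids, err)
}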
	I1101 10:16:12.158041  702757 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:16:12.168802  702757 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:16:12.168827  702757 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:16:12.168902  702757 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:16:12.178272  702757 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:12.179142  702757 kubeconfig.go:125] found "pause-297661" server: "https://192.168.76.2:8443"
	I1101 10:16:12.180289  702757 kapi.go:59] client config for pause-297661: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
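The client config dumped above is a standard client-go rest.Config driven by the profile's client certificate and key plus the cluster CA. A minimal sketch of building an equivalent config by hand and listing kube-system pods (paths and host taken from the log; this is an illustration, not minikube's kapi helper):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	home := "/home/jenkins/minikube-integration/21832-514161/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.76.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: home + "/profiles/pause-297661/client.crt",
			KeyFile:  home + "/profiles/pause-297661/client.key",
			CAFile:   home + "/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(pods.Items), "kube-system pods")
}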
	I1101 10:16:12.180828  702757 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:16:12.180867  702757 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:16:12.180874  702757 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:16:12.180879  702757 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:16:12.180884  702757 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:16:12.181273  702757 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:16:12.190710  702757 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:16:12.190754  702757 kubeadm.go:602] duration metric: took 21.920894ms to restartPrimaryControlPlane
	I1101 10:16:12.190767  702757 kubeadm.go:403] duration metric: took 83.284759ms to StartCluster
	I1101 10:16:12.190791  702757 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:12.190881  702757 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:16:12.191880  702757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:12.192211  702757 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:16:12.192289  702757 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:16:12.192428  702757 config.go:182] Loaded profile config "pause-297661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:12.195771  702757 out.go:179] * Enabled addons: 
	I1101 10:16:12.195777  702757 out.go:179] * Verifying Kubernetes components...
	W1101 10:16:11.281471  701851 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.32.0 -> Actual minikube version: v1.37.0
	I1101 10:16:11.281591  701851 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:11.291365  701851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:16:11.436146  701851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 10:16:11.442206  701851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:11.453568  701851 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 10:16:11.453650  701851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:16:11.464740  701851 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:16:11.464767  701851 start.go:496] detecting cgroup driver to use...
	I1101 10:16:11.464808  701851 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:16:11.464875  701851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:16:11.480652  701851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:16:11.494788  701851 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:16:11.494872  701851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:16:11.509775  701851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:16:11.524327  701851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:16:11.612398  701851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:16:11.687117  701851 docker.go:234] disabling docker service ...
	I1101 10:16:11.687185  701851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:16:11.701436  701851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:16:11.714461  701851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:16:11.782649  701851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:16:11.864451  701851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:16:11.878041  701851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:16:11.897473  701851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:16:11.897541  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.909350  701851 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:16:11.909428  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.921813  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.933480  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.945803  701851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:16:11.956476  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.968070  701851 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:16:11.986964  701851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
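Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not captured from the node; the real file carries additional options):

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]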
	I1101 10:16:11.998284  701851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:16:12.008856  701851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:16:12.019718  701851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:12.097482  701851 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:16:12.208745  701851 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:16:12.208815  701851 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:16:12.213688  701851 start.go:564] Will wait 60s for crictl version
	I1101 10:16:12.213767  701851 ssh_runner.go:195] Run: which crictl
	I1101 10:16:12.218352  701851 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:16:12.260074  701851 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 10:16:12.260162  701851 ssh_runner.go:195] Run: crio --version
	I1101 10:16:12.297205  701851 ssh_runner.go:195] Run: crio --version
	I1101 10:16:12.338661  701851 out.go:179] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1101 10:16:12.196742  702757 addons.go:515] duration metric: took 4.466984ms for enable addons: enabled=[]
	I1101 10:16:12.196787  702757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:12.351176  702757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:16:12.366625  702757 node_ready.go:35] waiting up to 6m0s for node "pause-297661" to be "Ready" ...
	I1101 10:16:12.374920  702757 node_ready.go:49] node "pause-297661" is "Ready"
	I1101 10:16:12.374952  702757 node_ready.go:38] duration metric: took 8.284808ms for node "pause-297661" to be "Ready" ...
	I1101 10:16:12.374971  702757 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:16:12.375032  702757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:12.388511  702757 api_server.go:72] duration metric: took 196.249946ms to wait for apiserver process to appear ...
	I1101 10:16:12.388545  702757 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:16:12.388574  702757 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:16:12.392829  702757 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:16:12.393793  702757 api_server.go:141] control plane version: v1.34.1
	I1101 10:16:12.393820  702757 api_server.go:131] duration metric: took 5.268141ms to wait for apiserver health ...
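The health gate above is an HTTPS GET against /healthz that must return 200 with body "ok". A small sketch of the same probe, assuming the profile CA path from this run and an illustrative one-second poll with a two-minute deadline:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// The log shows this endpoint answering 200 "ok" once the
			// control plane is up.
			if resp.StatusCode == 200 && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		if time.Now().After(deadline) {
			panic("apiserver did not become healthy in time")
		}
		time.Sleep(time.Second)
	}
}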
	I1101 10:16:12.393831  702757 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:16:12.397406  702757 system_pods.go:59] 7 kube-system pods found
	I1101 10:16:12.397466  702757 system_pods.go:61] "coredns-66bc5c9577-sdhft" [1680b086-3fa8-4b80-9705-650dcd1f0da2] Running
	I1101 10:16:12.397476  702757 system_pods.go:61] "etcd-pause-297661" [004f1413-5456-4433-a27c-e6d6cdebbeb7] Running
	I1101 10:16:12.397482  702757 system_pods.go:61] "kindnet-vlk6r" [263025a4-2ce5-48bc-805a-20a2a35bb5f2] Running
	I1101 10:16:12.397488  702757 system_pods.go:61] "kube-apiserver-pause-297661" [dd149e49-01fe-48e9-bb94-ba6f69de3812] Running
	I1101 10:16:12.397494  702757 system_pods.go:61] "kube-controller-manager-pause-297661" [0d0a0202-ad04-4392-af68-d0691f7cfb69] Running
	I1101 10:16:12.397505  702757 system_pods.go:61] "kube-proxy-5mqgt" [4c409377-301d-463a-8a0e-beb0afb959c7] Running
	I1101 10:16:12.397510  702757 system_pods.go:61] "kube-scheduler-pause-297661" [566a3183-af28-4b5c-a6da-ff5231371114] Running
	I1101 10:16:12.397520  702757 system_pods.go:74] duration metric: took 3.668609ms to wait for pod list to return data ...
	I1101 10:16:12.397535  702757 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:16:12.400095  702757 default_sa.go:45] found service account: "default"
	I1101 10:16:12.400126  702757 default_sa.go:55] duration metric: took 2.582356ms for default service account to be created ...
	I1101 10:16:12.400143  702757 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:16:12.403510  702757 system_pods.go:86] 7 kube-system pods found
	I1101 10:16:12.403546  702757 system_pods.go:89] "coredns-66bc5c9577-sdhft" [1680b086-3fa8-4b80-9705-650dcd1f0da2] Running
	I1101 10:16:12.403555  702757 system_pods.go:89] "etcd-pause-297661" [004f1413-5456-4433-a27c-e6d6cdebbeb7] Running
	I1101 10:16:12.403560  702757 system_pods.go:89] "kindnet-vlk6r" [263025a4-2ce5-48bc-805a-20a2a35bb5f2] Running
	I1101 10:16:12.403566  702757 system_pods.go:89] "kube-apiserver-pause-297661" [dd149e49-01fe-48e9-bb94-ba6f69de3812] Running
	I1101 10:16:12.403571  702757 system_pods.go:89] "kube-controller-manager-pause-297661" [0d0a0202-ad04-4392-af68-d0691f7cfb69] Running
	I1101 10:16:12.403576  702757 system_pods.go:89] "kube-proxy-5mqgt" [4c409377-301d-463a-8a0e-beb0afb959c7] Running
	I1101 10:16:12.403581  702757 system_pods.go:89] "kube-scheduler-pause-297661" [566a3183-af28-4b5c-a6da-ff5231371114] Running
	I1101 10:16:12.403592  702757 system_pods.go:126] duration metric: took 3.440505ms to wait for k8s-apps to be running ...
	I1101 10:16:12.403606  702757 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:16:12.403662  702757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:12.419217  702757 system_svc.go:56] duration metric: took 15.595949ms WaitForService to wait for kubelet
	I1101 10:16:12.419252  702757 kubeadm.go:587] duration metric: took 226.997941ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:16:12.419293  702757 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:16:12.422402  702757 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:16:12.422436  702757 node_conditions.go:123] node cpu capacity is 8
	I1101 10:16:12.422452  702757 node_conditions.go:105] duration metric: took 3.152854ms to run NodePressure ...
	I1101 10:16:12.422469  702757 start.go:242] waiting for startup goroutines ...
	I1101 10:16:12.422480  702757 start.go:247] waiting for cluster config update ...
	I1101 10:16:12.422490  702757 start.go:256] writing updated cluster config ...
	I1101 10:16:12.422873  702757 ssh_runner.go:195] Run: rm -f paused
	I1101 10:16:12.427386  702757 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:16:12.428018  702757 kapi.go:59] client config for pause-297661: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/pause-297661/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:16:12.431429  702757 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sdhft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.436393  702757 pod_ready.go:94] pod "coredns-66bc5c9577-sdhft" is "Ready"
	I1101 10:16:12.436421  702757 pod_ready.go:86] duration metric: took 4.968434ms for pod "coredns-66bc5c9577-sdhft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.438702  702757 pod_ready.go:83] waiting for pod "etcd-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.443390  702757 pod_ready.go:94] pod "etcd-pause-297661" is "Ready"
	I1101 10:16:12.443422  702757 pod_ready.go:86] duration metric: took 4.688891ms for pod "etcd-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.445715  702757 pod_ready.go:83] waiting for pod "kube-apiserver-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.450509  702757 pod_ready.go:94] pod "kube-apiserver-pause-297661" is "Ready"
	I1101 10:16:12.450538  702757 pod_ready.go:86] duration metric: took 4.797086ms for pod "kube-apiserver-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:12.452691  702757 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
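Each of these pod_ready waits amounts to reading the pod and checking for a Ready condition with status True, or treating a missing pod as done. A hedged client-go sketch of that check (function name and package are illustrative; minikube's pod_ready.go does more bookkeeping):

package ready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReadyOrGone reports whether the named pod either no longer
// exists or has its Ready condition set to True.
func isPodReadyOrGone(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // a deleted pod counts as "done waiting"
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}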
	I1101 10:16:08.596290  699371 cli_runner.go:164] Run: docker network inspect running-upgrade-821146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:16:08.615057  699371 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:16:08.619288  699371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
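The bash one-liner above makes the /etc/hosts entry idempotent: it strips any existing line for the name, appends the fresh "IP name" pair, and copies the result back in one step. A rough Go equivalent of the same update (hypothetical helper, shown only to spell out the technique; the real flow runs the shell pipeline over SSH as root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one
// "<ip>\t<name>" line, dropping any previous line for name first.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}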
	I1101 10:16:08.632234  699371 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 10:16:08.632310  699371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:08.700381  699371 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 10:16:08.700398  699371 crio.go:415] Images already preloaded, skipping extraction
	I1101 10:16:08.700460  699371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:08.739505  699371 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 10:16:08.739523  699371 cache_images.go:84] Images are preloaded, skipping loading
	I1101 10:16:08.739585  699371 ssh_runner.go:195] Run: crio config
	I1101 10:16:08.784002  699371 cni.go:84] Creating CNI manager for ""
	I1101 10:16:08.784020  699371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:16:08.784045  699371 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 10:16:08.784067  699371 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-821146 NodeName:running-upgrade-821146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:16:08.784225  699371 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-821146"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:16:08.784287  699371 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=running-upgrade-821146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-821146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 10:16:08.784343  699371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 10:16:08.794771  699371 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:16:08.794863  699371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:16:08.804816  699371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1101 10:16:08.824898  699371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:16:08.847095  699371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1101 10:16:08.867313  699371 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:16:08.871578  699371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:16:08.884177  699371 certs.go:56] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146 for IP: 192.168.85.2
	I1101 10:16:08.884218  699371 certs.go:190] acquiring lock for shared ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:08.884382  699371 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:16:08.884417  699371 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:16:08.884459  699371 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.key
	I1101 10:16:08.884468  699371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.crt with IP's: []
	I1101 10:16:09.018306  699371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.crt ...
	I1101 10:16:09.018324  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.crt: {Name:mkebb948426e0df207ca499f0bf3906116d6ac56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.018532  699371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.key ...
	I1101 10:16:09.018591  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/client.key: {Name:mk6402d3ca5bae4d9ebd11f18db1c42a81b05ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.018691  699371 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key.43b9df8c
	I1101 10:16:09.018701  699371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 10:16:09.164681  699371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt.43b9df8c ...
	I1101 10:16:09.164699  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt.43b9df8c: {Name:mk3ee3cd5185c3e81e853ca95204110a187312f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.164882  699371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key.43b9df8c ...
	I1101 10:16:09.164892  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key.43b9df8c: {Name:mk8aa052fbf9204e6e1f2ad1c3fb3404e44232f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.164962  699371 certs.go:337] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt
	I1101 10:16:09.165033  699371 certs.go:341] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key
	I1101 10:16:09.165079  699371 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.key
	I1101 10:16:09.165088  699371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.crt with IP's: []
	I1101 10:16:09.284135  699371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.crt ...
	I1101 10:16:09.284153  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.crt: {Name:mk475c901dc2d91b0c1db1c5b6f81a461bff5868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.284784  699371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.key ...
	I1101 10:16:09.284798  699371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.key: {Name:mk2db181e43018b8dd5dbaef19b77899d02377bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:09.285066  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:16:09.285119  699371 certs.go:433] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:16:09.285135  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:16:09.285165  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:16:09.285193  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:16:09.285225  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:16:09.285299  699371 certs.go:437] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:09.286247  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 10:16:09.314968  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:16:09.342938  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:16:09.369961  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/running-upgrade-821146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:16:09.396624  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:16:09.424216  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:16:09.452710  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:16:09.481172  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:16:09.508188  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:16:09.538993  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:16:09.565324  699371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:16:09.593526  699371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:16:09.615194  699371 ssh_runner.go:195] Run: openssl version
	I1101 10:16:09.622020  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:16:09.635222  699371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:16:09.639518  699371 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:16:09.639580  699371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:16:09.647592  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:16:09.659265  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:16:09.669863  699371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:09.674262  699371 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:09.674334  699371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:09.682772  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:16:09.693544  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:16:09.704242  699371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:16:09.708035  699371 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:16:09.708102  699371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:16:09.715556  699371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
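The pattern above (hash the certificate with openssl x509 -hash, then link it as <hash>.0 under /etc/ssl/certs) is what lets OpenSSL's CA directory lookup find the extra certs. A sketch of the same install step in Go, roughly mirroring the ln -fs calls in the log (paths taken from the log, error handling trimmed, root privileges assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCert links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name (<hash>.0).
func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // same effect as ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/517687.pem",
		"/usr/share/ca-certificates/5176872.pem",
	} {
		if err := installCert(c); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}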
	I1101 10:16:09.726628  699371 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 10:16:09.730778  699371 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 10:16:09.730849  699371 kubeadm.go:404] StartCluster: {Name:running-upgrade-821146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-821146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 10:16:09.730930  699371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:16:09.731019  699371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:16:09.770023  699371 cri.go:89] found id: ""
	I1101 10:16:09.770092  699371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:16:09.780138  699371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:16:09.790516  699371 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:16:09.790578  699371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:16:09.801915  699371 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:16:09.801958  699371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:16:09.904917  699371 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:16:09.986131  699371 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:16:12.831870  702757 pod_ready.go:94] pod "kube-controller-manager-pause-297661" is "Ready"
	I1101 10:16:12.831901  702757 pod_ready.go:86] duration metric: took 379.183696ms for pod "kube-controller-manager-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:13.032376  702757 pod_ready.go:83] waiting for pod "kube-proxy-5mqgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:13.432411  702757 pod_ready.go:94] pod "kube-proxy-5mqgt" is "Ready"
	I1101 10:16:13.432440  702757 pod_ready.go:86] duration metric: took 400.034314ms for pod "kube-proxy-5mqgt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:13.631923  702757 pod_ready.go:83] waiting for pod "kube-scheduler-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:14.031711  702757 pod_ready.go:94] pod "kube-scheduler-pause-297661" is "Ready"
	I1101 10:16:14.031747  702757 pod_ready.go:86] duration metric: took 399.79457ms for pod "kube-scheduler-pause-297661" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:16:14.031762  702757 pod_ready.go:40] duration metric: took 1.604339868s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:16:14.079955  702757 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:16:14.081598  702757 out.go:179] * Done! kubectl is now configured to use "pause-297661" cluster and "default" namespace by default
	I1101 10:16:11.075221  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:11.097295  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:11.097367  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:11.097385  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:11.097420  701603 retry.go:31] will retry after 2.299770178s: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:13.397970  701603 cli_runner.go:164] Run: docker container inspect missing-upgrade-489499 --format={{.State.Status}}
	W1101 10:16:13.416971  701603 cli_runner.go:211] docker container inspect missing-upgrade-489499 --format={{.State.Status}} returned with exit code 1
	I1101 10:16:13.417035  701603 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
	I1101 10:16:13.417045  701603 oci.go:673] temporary error: container missing-upgrade-489499 status is  but expect it to be exited
	I1101 10:16:13.417071  701603 retry.go:31] will retry after 4.406936807s: couldn't verify container is exited. %v: unknown state "missing-upgrade-489499": docker container inspect missing-upgrade-489499 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-489499
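The "will retry after 2.299770178s" and "4.406936807s" lines above come from a jittered, roughly doubling backoff between docker inspect attempts while waiting for the container to reach the exited state. A minimal sketch of that kind of retry loop, assuming a caller-supplied check function and illustrative bounds (not minikube's retry.go itself):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling check until it succeeds or attempts run out,
// roughly doubling the wait each round and adding random jitter.
func retry(check func() error, attempts int, base time.Duration) error {
	wait := base
	for i := 0; i < attempts; i++ {
		if err := check(); err == nil {
			return nil
		} else {
			sleep := wait + time.Duration(rand.Int63n(int64(wait)/2))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		wait *= 2
	}
	return errors.New("gave up waiting for container to exit")
}

func main() {
	calls := 0
	_ = retry(func() error {
		calls++
		if calls < 3 {
			return errors.New("container not exited yet")
		}
		return nil
	}, 5, 2*time.Second)
}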
	I1101 10:16:12.339671  701851 cli_runner.go:164] Run: docker network inspect stopped-upgrade-333944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:16:12.359438  701851 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 10:16:12.363686  701851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:16:12.377246  701851 kubeadm.go:884] updating cluster {Name:stopped-upgrade-333944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-333944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:16:12.377387  701851 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 10:16:12.377461  701851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:12.427006  701851 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:16:12.427032  701851 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:16:12.427084  701851 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:16:12.468178  701851 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:16:12.468201  701851 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:16:12.468212  701851 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.3 crio true true} ...
	I1101 10:16:12.468328  701851 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-333944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-333944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:16:12.468397  701851 ssh_runner.go:195] Run: crio config
	I1101 10:16:12.516599  701851 cni.go:84] Creating CNI manager for ""
	I1101 10:16:12.516622  701851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:16:12.516660  701851 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:16:12.516696  701851 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-333944 NodeName:stopped-upgrade-333944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:16:12.516895  701851 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-333944"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:16:12.516978  701851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 10:16:12.527182  701851 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:16:12.527261  701851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:16:12.537459  701851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:16:12.557409  701851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:16:12.577191  701851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1101 10:16:12.599524  701851 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:16:12.603440  701851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:16:12.616384  701851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:16:12.686057  701851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:16:12.716434  701851 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944 for IP: 192.168.94.2
	I1101 10:16:12.716461  701851 certs.go:195] generating shared ca certs ...
	I1101 10:16:12.716486  701851 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:12.716650  701851 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:16:12.716688  701851 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:16:12.716698  701851 certs.go:257] generating profile certs ...
	I1101 10:16:12.716818  701851 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/client.key
	I1101 10:16:12.716874  701851 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key.30e2cb39
	I1101 10:16:12.716892  701851 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt.30e2cb39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1101 10:16:13.013363  701851 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt.30e2cb39 ...
	I1101 10:16:13.013403  701851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt.30e2cb39: {Name:mk3b5ec04d1c7859f7248b1b748749b10f12813e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:13.013629  701851 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key.30e2cb39 ...
	I1101 10:16:13.013652  701851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key.30e2cb39: {Name:mkacc5dce1c72baecbfce14bbf129eb0f38259b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:13.013765  701851 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt.30e2cb39 -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt
	I1101 10:16:13.013982  701851 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key.30e2cb39 -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key
	I1101 10:16:13.014198  701851 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/proxy-client.key
	I1101 10:16:13.014347  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:16:13.014393  701851 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:16:13.014407  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:16:13.014439  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:16:13.014474  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:16:13.014511  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:16:13.014568  701851 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:16:13.015193  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:16:13.043499  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:16:13.070630  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:16:13.097892  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:16:13.126004  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:16:13.153740  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:16:13.181473  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:16:13.208934  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:16:13.236773  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:16:13.265284  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:16:13.295326  701851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:16:13.323167  701851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:16:13.344130  701851 ssh_runner.go:195] Run: openssl version
	I1101 10:16:13.350508  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:16:13.361695  701851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:16:13.365755  701851 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:16:13.365821  701851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:16:13.373089  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:16:13.383598  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:16:13.394961  701851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:13.399250  701851 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:13.399314  701851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:16:13.407144  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:16:13.419351  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:16:13.431908  701851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:16:13.436652  701851 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:16:13.436714  701851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:16:13.444463  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
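(Editor's aside: the hash printed by each "openssl x509 -hash -noout" run above is exactly what the following "ln -fs" uses as the symlink name under /etc/ssl/certs — 3ec20f2e.0, b5213941.0, 51391683.0 — which is how OpenSSL-style trust stores locate a CA by subject hash.)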
	I1101 10:16:13.454902  701851 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:16:13.459325  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:16:13.467428  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:16:13.475219  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:16:13.483058  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:16:13.491354  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:16:13.499890  701851 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
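(Editor's aside: each of the six openssl runs above verifies that a certificate remains valid for at least the next 86400 seconds. A minimal Go sketch of the same check — illustrative only, using one of the paths from the log — could be:)

    // certcheck.go — illustrative sketch of the same test as "openssl x509 -checkend 86400":
    // does the certificate expire within the next 24 hours?
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        deadline := time.Now().Add(86400 * time.Second) // same window as -checkend 86400
        if cert.NotAfter.Before(deadline) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid beyond 24h, NotAfter:", cert.NotAfter)
    }
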
	I1101 10:16:13.507652  701851 kubeadm.go:401] StartCluster: {Name:stopped-upgrade-333944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-333944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:16:13.507742  701851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:16:13.507807  701851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:16:13.550876  701851 cri.go:89] found id: ""
	I1101 10:16:13.550950  701851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1101 10:16:13.562685  701851 kubeadm.go:414] apiserver tunnel failed: apiserver port not set
	I1101 10:16:13.562715  701851 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:16:13.562723  701851 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:16:13.562775  701851 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:16:13.573943  701851 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.574624  701851 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-333944" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:16:13.575107  701851 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-333944" cluster setting kubeconfig missing "stopped-upgrade-333944" context setting]
	I1101 10:16:13.575732  701851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:16:13.576494  701851 kapi.go:59] client config for stopped-upgrade-333944: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/stopped-upgrade-333944/client.key", CAFile:"/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:16:13.576899  701851 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:16:13.576913  701851 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:16:13.576917  701851 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:16:13.576921  701851 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:16:13.576924  701851 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:16:13.577286  701851 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:16:13.587872  701851 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-01 10:15:50.528118420 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-01 10:16:12.596531486 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: systemd
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
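(Editor's aside: the drift above is found by diffing the deployed /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new; only the containerRuntimeEndpoint line differs, which is enough to trigger a reconfigure. A simplified, stdlib-only sketch of such a drift check — minikube itself shells out to "sudo diff -u", as the log shows — might be:)

    // driftcheck.go — simplified, illustrative version of the config-drift test;
    // paths are the ones from the log above.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        oldCfg, errOld := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        newCfg, errNew := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if errOld != nil || errNew != nil {
            fmt.Fprintln(os.Stderr, "cannot read configs:", errOld, errNew)
            os.Exit(1)
        }
        if bytes.Equal(oldCfg, newCfg) {
            fmt.Println("no drift: deployed kubeadm.yaml matches the generated config")
            return
        }
        fmt.Println("drift detected: cluster should be reconfigured from kubeadm.yaml.new")
    }
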
	I1101 10:16:13.587894  701851 kubeadm.go:1161] stopping kube-system containers ...
	I1101 10:16:13.587909  701851 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 10:16:13.587961  701851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:16:13.629731  701851 cri.go:89] found id: ""
	I1101 10:16:13.629820  701851 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 10:16:13.644473  701851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:16:13.656215  701851 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Nov  1 10:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  1 10:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Nov  1 10:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  1 10:15 /etc/kubernetes/scheduler.conf
	
	I1101 10:16:13.656287  701851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1101 10:16:13.668295  701851 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.668366  701851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:16:13.678955  701851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1101 10:16:13.690497  701851 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.690562  701851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:16:13.701497  701851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1101 10:16:13.712114  701851 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.712198  701851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:16:13.722460  701851 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1101 10:16:13.733124  701851 kubeadm.go:164] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:13.733184  701851 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:16:13.743829  701851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:16:13.754675  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:13.814900  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:14.707071  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:14.893553  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:14.966450  701851 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:16:15.041070  701851 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:16:15.041164  701851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:15.541591  701851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:16.041955  701851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:16.062370  701851 api_server.go:72] duration metric: took 1.021316236s to wait for apiserver process to appear ...
	I1101 10:16:16.062394  701851 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:16:16.062414  701851 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
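(Editor's aside: the wait that starts here polls https://192.168.94.2:8443/healthz until the API server answers. A minimal, illustrative Go probe of the same endpoint — assuming the cluster CA at the path the log shows and anonymous access to /healthz, which is the Kubernetes default; this is not minikube's actual implementation — might look like:)

    // healthzprobe.go — illustrative probe of the endpoint the log is waiting on.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        resp, err := client.Get("https://192.168.94.2:8443/healthz")
        if err != nil {
            fmt.Fprintln(os.Stderr, "healthz not reachable yet:", err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }
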
	
	
	==> CRI-O <==
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.845388974Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846238103Z" level=info msg="Conmon does support the --sync option"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846256487Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846269728Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846977274Z" level=info msg="Conmon does support the --sync option"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.846993715Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.851169164Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.851203449Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.851808429Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.852328265Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.852396055Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.858331013Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.89779886Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-sdhft Namespace:kube-system ID:2d6173903cf69fd71a52f980550120f31b77ecd258d533ec4380ab058a5e9104 UID:1680b086-3fa8-4b80-9705-650dcd1f0da2 NetNS:/var/run/netns/d8bb2587-6396-4438-9db9-43295182b658 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00053c0b8}] Aliases:map[]}"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898062459Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-sdhft for CNI network kindnet (type=ptp)"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898611834Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898634039Z" level=info msg="Starting seccomp notifier watcher"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898683389Z" level=info msg="Create NRI interface"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898811047Z" level=info msg="built-in NRI default validator is disabled"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898822189Z" level=info msg="runtime interface created"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898861498Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.89887145Z" level=info msg="runtime interface starting up..."
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898879815Z" level=info msg="starting plugins..."
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.898894911Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 01 10:16:10 pause-297661 crio[2200]: time="2025-11-01T10:16:10.899306442Z" level=info msg="No systemd watchdog enabled"
	Nov 01 10:16:10 pause-297661 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	540e6f288254c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   2d6173903cf69       coredns-66bc5c9577-sdhft               kube-system
	ad61b10f8e140       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   321500a0559f0       kube-proxy-5mqgt                       kube-system
	11a7d411789fa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   a9022fc85dd84       kindnet-vlk6r                          kube-system
	0bd1538ac2657       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   43 seconds ago      Running             kube-controller-manager   0                   41786b85536e3       kube-controller-manager-pause-297661   kube-system
	24e09344febf4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   43 seconds ago      Running             kube-scheduler            0                   90ed41db61fc5       kube-scheduler-pause-297661            kube-system
	472cb4bf17c60       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   43 seconds ago      Running             kube-apiserver            0                   2fb0cdec64231       kube-apiserver-pause-297661            kube-system
	4cf89bdef43bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   43 seconds ago      Running             etcd                      0                   ce4b61eb3b0b5       etcd-pause-297661                      kube-system
	
	
	==> coredns [540e6f288254c2f91c0b576e675ab75f176f33dc04857cd29478b2be023c0967] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35693 - 28569 "HINFO IN 7267284124165664637.5832267534672565079. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078967336s
	
	
	==> describe nodes <==
	Name:               pause-297661
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-297661
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=pause-297661
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_15_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:15:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-297661
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:16:04 +0000   Sat, 01 Nov 2025 10:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:16:04 +0000   Sat, 01 Nov 2025 10:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:16:04 +0000   Sat, 01 Nov 2025 10:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:16:04 +0000   Sat, 01 Nov 2025 10:16:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-297661
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                217a4a88-d1dc-46a4-b597-55c22a5e81c2
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-sdhft                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-297661                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-vlk6r                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-297661             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-pause-297661    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-5mqgt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-297661             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node pause-297661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node pause-297661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node pause-297661 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node pause-297661 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node pause-297661 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node pause-297661 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node pause-297661 event: Registered Node pause-297661 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-297661 status is now: NodeReady
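(Editor's aside: the percentages in the "Allocated resources" table above follow from the node capacity listed earlier in the same output: 850m of CPU requested against 8 CPUs (8000m) is 850/8000 ≈ 10.6%, shown as 10%, and the 100m CPU limit is 1.25%, shown as 1%; the 220Mi memory figures are well under 1% of the 32863356Ki node, hence 0%.)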
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [4cf89bdef43bcb6a8880f0173eb19d34c955c26650e304b2d61776b18a9f36c3] <==
	{"level":"warn","ts":"2025-11-01T10:15:44.008098Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.235523ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351100809765 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:kube-scheduler\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:kube-scheduler\" value_size:1768 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-01T10:15:44.008184Z","caller":"traceutil/trace.go:172","msg":"trace[1878441020] transaction","detail":"{read_only:false; response_revision:101; number_of_response:1; }","duration":"350.244899ms","start":"2025-11-01T10:15:43.657926Z","end":"2025-11-01T10:15:44.008171Z","steps":["trace[1878441020] 'process raft request'  (duration: 106.88717ms)","trace[1878441020] 'compare'  (duration: 243.076018ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:15:44.008228Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:15:43.657903Z","time spent":"350.30848ms","remote":"127.0.0.1:55496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1820,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:kube-scheduler\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:kube-scheduler\" value_size:1768 >> failure:<>"}
	{"level":"warn","ts":"2025-11-01T10:15:44.434563Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.393563ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351100809767 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:controller:attachdetach-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:attachdetach-controller\" value_size:865 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-01T10:15:44.434636Z","caller":"traceutil/trace.go:172","msg":"trace[755911] linearizableReadLoop","detail":"{readStateIndex:106; appliedIndex:105; }","duration":"133.07005ms","start":"2025-11-01T10:15:44.301555Z","end":"2025-11-01T10:15:44.434625Z","steps":["trace[755911] 'read index received'  (duration: 41.009µs)","trace[755911] 'applied index is now lower than readState.Index'  (duration: 133.028519ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:15:44.434690Z","caller":"traceutil/trace.go:172","msg":"trace[608754834] transaction","detail":"{read_only:false; response_revision:102; number_of_response:1; }","duration":"421.922905ms","start":"2025-11-01T10:15:44.012719Z","end":"2025-11-01T10:15:44.434642Z","steps":["trace[608754834] 'process raft request'  (duration: 170.403348ms)","trace[608754834] 'compare'  (duration: 251.26647ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:15:44.434739Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.181975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-01T10:15:44.434770Z","caller":"traceutil/trace.go:172","msg":"trace[630951225] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:102; }","duration":"133.219197ms","start":"2025-11-01T10:15:44.301542Z","end":"2025-11-01T10:15:44.434762Z","steps":["trace[630951225] 'agreement among raft nodes before linearized reading'  (duration: 133.146566ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:15:44.434802Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:15:44.012701Z","time spent":"422.053583ms","remote":"127.0.0.1:55496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":937,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:controller:attachdetach-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:attachdetach-controller\" value_size:865 >> failure:<>"}
	{"level":"warn","ts":"2025-11-01T10:15:44.691814Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.904491ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351100809773 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-297661.1873da82bef39798\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-297661.1873da82bef39798\" value_size:544 lease:6414984314246033960 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-01T10:15:44.692051Z","caller":"traceutil/trace.go:172","msg":"trace[353057931] transaction","detail":"{read_only:false; response_revision:105; number_of_response:1; }","duration":"239.388556ms","start":"2025-11-01T10:15:44.452653Z","end":"2025-11-01T10:15:44.692041Z","steps":["trace[353057931] 'process raft request'  (duration: 239.34498ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:15:44.692061Z","caller":"traceutil/trace.go:172","msg":"trace[1926171201] transaction","detail":"{read_only:false; response_revision:104; number_of_response:1; }","duration":"249.727729ms","start":"2025-11-01T10:15:44.442320Z","end":"2025-11-01T10:15:44.692048Z","steps":["trace[1926171201] 'process raft request'  (duration: 249.627974ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:15:44.692063Z","caller":"traceutil/trace.go:172","msg":"trace[1163522496] transaction","detail":"{read_only:false; response_revision:103; number_of_response:1; }","duration":"251.897638ms","start":"2025-11-01T10:15:44.440138Z","end":"2025-11-01T10:15:44.692035Z","steps":["trace[1163522496] 'process raft request'  (duration: 119.726489ms)","trace[1163522496] 'compare'  (duration: 131.803742ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:15:44.788476Z","caller":"traceutil/trace.go:172","msg":"trace[1301626639] transaction","detail":"{read_only:false; response_revision:106; number_of_response:1; }","duration":"144.748934ms","start":"2025-11-01T10:15:44.643704Z","end":"2025-11-01T10:15:44.788453Z","steps":["trace[1301626639] 'process raft request'  (duration: 144.64552ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:15:45.025255Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.430804ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351100809778 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-297661.1873da82bfcc508d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-297661.1873da82bfcc508d\" value_size:598 lease:6414984314246033960 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-01T10:15:45.025400Z","caller":"traceutil/trace.go:172","msg":"trace[1101190000] transaction","detail":"{read_only:false; response_revision:108; number_of_response:1; }","duration":"234.299987ms","start":"2025-11-01T10:15:44.791088Z","end":"2025-11-01T10:15:45.025388Z","steps":["trace[1101190000] 'process raft request'  (duration: 234.255895ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:15:45.025465Z","caller":"traceutil/trace.go:172","msg":"trace[1161192082] transaction","detail":"{read_only:false; response_revision:107; number_of_response:1; }","duration":"330.046455ms","start":"2025-11-01T10:15:44.695390Z","end":"2025-11-01T10:15:45.025437Z","steps":["trace[1161192082] 'process raft request'  (duration: 206.367236ms)","trace[1161192082] 'compare'  (duration: 123.324064ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:15:45.025573Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:15:44.695375Z","time spent":"330.156011ms","remote":"127.0.0.1:54912","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-297661.1873da82bfcc508d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-297661.1873da82bfcc508d\" value_size:598 lease:6414984314246033960 >> failure:<>"}
	{"level":"info","ts":"2025-11-01T10:15:45.135121Z","caller":"traceutil/trace.go:172","msg":"trace[774693436] transaction","detail":"{read_only:false; response_revision:110; number_of_response:1; }","duration":"105.448573ms","start":"2025-11-01T10:15:45.029653Z","end":"2025-11-01T10:15:45.135101Z","steps":["trace[774693436] 'process raft request'  (duration: 105.40472ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:15:45.135168Z","caller":"traceutil/trace.go:172","msg":"trace[709169259] transaction","detail":"{read_only:false; response_revision:109; number_of_response:1; }","duration":"107.14028ms","start":"2025-11-01T10:15:45.027988Z","end":"2025-11-01T10:15:45.135128Z","steps":["trace[709169259] 'process raft request'  (duration: 102.355198ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:16:04.656697Z","caller":"traceutil/trace.go:172","msg":"trace[1659511800] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"212.729695ms","start":"2025-11-01T10:16:04.443945Z","end":"2025-11-01T10:16:04.656674Z","steps":["trace[1659511800] 'process raft request'  (duration: 212.569634ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:16:04.778298Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.338118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:16:04.778384Z","caller":"traceutil/trace.go:172","msg":"trace[726824992] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:420; }","duration":"120.434958ms","start":"2025-11-01T10:16:04.657929Z","end":"2025-11-01T10:16:04.778364Z","steps":["trace[726824992] 'agreement among raft nodes before linearized reading'  (duration: 60.704216ms)","trace[726824992] 'range keys from in-memory index tree'  (duration: 59.598425ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:16:04.778447Z","caller":"traceutil/trace.go:172","msg":"trace[452978060] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"331.325731ms","start":"2025-11-01T10:16:04.447099Z","end":"2025-11-01T10:16:04.778425Z","steps":["trace[452978060] 'process raft request'  (duration: 271.534053ms)","trace[452978060] 'compare'  (duration: 59.611955ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:16:04.778788Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:16:04.447079Z","time spent":"331.446048ms","remote":"127.0.0.1:55132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5421,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/pause-297661\" mod_revision:340 > success:<request_put:<key:\"/registry/minions/pause-297661\" value_size:5383 >> failure:<request_range:<key:\"/registry/minions/pause-297661\" > >"}
	
	
	==> kernel <==
	 10:16:19 up  2:58,  0 user,  load average: 5.53, 1.98, 2.08
	Linux pause-297661 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11a7d411789fa6a12c87e30dddaad6f06e2d9ee1da69d65d8156525d726e8342] <==
	I1101 10:15:53.788812       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:15:53.854915       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:15:53.855079       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:15:53.855096       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:15:53.855136       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:15:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:15:54.057558       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:15:54.057583       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:15:54.057597       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:15:54.155913       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:15:54.457773       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:15:54.457799       1 metrics.go:72] Registering metrics
	I1101 10:15:54.457911       1 controller.go:711] "Syncing nftables rules"
	I1101 10:16:04.057855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:16:04.057955       1 main.go:301] handling current node
	I1101 10:16:14.060267       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:16:14.060310       1 main.go:301] handling current node
	
	
	==> kube-apiserver [472cb4bf17c605290e55b8041352682602fbd3184fdcf7ae902cf8466aacac4c] <==
	I1101 10:15:39.099781       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:15:39.099819       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:15:39.100227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:15:39.107106       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:15:39.111164       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:15:39.123686       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:15:39.126021       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:15:39.137239       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:15:40.058251       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:15:40.129534       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:15:40.129634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:15:45.580151       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:15:45.640039       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:15:45.708138       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:15:45.716914       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:15:45.718886       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:15:45.725483       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:15:46.088073       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:15:46.815391       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:15:46.826442       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:15:46.835419       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:15:51.982601       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:15:52.032285       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:15:52.037987       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:15:52.181676       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0bd1538ac2657af6c6a5e8f373e61727a3b6a24642d5fc1bb8689a6cd54bc641] <==
	I1101 10:15:51.077016       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:15:51.078173       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:15:51.078268       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:15:51.078290       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:15:51.078365       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:15:51.078366       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:15:51.078511       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:15:51.078533       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:15:51.078682       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:15:51.079030       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:15:51.079037       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:15:51.079140       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:15:51.079148       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:15:51.079216       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-297661"
	I1101 10:15:51.079282       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:15:51.080557       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:15:51.080587       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:15:51.080639       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:15:51.080653       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:15:51.080952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:15:51.080955       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:15:51.083138       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:15:51.091356       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:15:51.101755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:16:06.082157       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad61b10f8e140aeb0af6fd55e782e028e92c86d23d31f34a996fe6bee23d45e7] <==
	I1101 10:15:53.609484       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:15:53.677206       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:15:53.777882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:15:53.777951       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:15:53.778044       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:15:53.797661       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:15:53.797717       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:15:53.803250       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:15:53.803697       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:15:53.803716       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:15:53.805336       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:15:53.805362       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:15:53.805398       1 config.go:200] "Starting service config controller"
	I1101 10:15:53.805421       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:15:53.805417       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:15:53.805441       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:15:53.805482       1 config.go:309] "Starting node config controller"
	I1101 10:15:53.805487       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:15:53.805502       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:15:53.905556       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:15:53.905595       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:15:53.905570       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [24e09344febf421139bbbdae8d663120c3c223b397b6fa22e35806255e5a549b] <==
	E1101 10:15:40.505386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:15:40.554779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:15:40.626354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:15:40.677239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:15:41.924339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:15:41.962826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:15:42.068516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:15:42.146457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:15:42.204125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:15:42.220502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:15:42.232867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:15:42.271254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:15:42.387031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:15:42.426440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:15:42.446734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:15:42.446906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:15:42.488619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:15:42.921494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:15:43.065008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:15:43.176236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:15:43.184917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:15:43.287719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:15:43.656980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:15:45.238774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1101 10:15:47.788415       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:15:47 pause-297661 kubelet[1340]: I1101 10:15:47.724388    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-297661" podStartSLOduration=3.72436216 podStartE2EDuration="3.72436216s" podCreationTimestamp="2025-11-01 10:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:15:47.710667469 +0000 UTC m=+1.127476042" watchObservedRunningTime="2025-11-01 10:15:47.72436216 +0000 UTC m=+1.141170733"
	Nov 01 10:15:51 pause-297661 kubelet[1340]: I1101 10:15:51.111758    1340 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:15:51 pause-297661 kubelet[1340]: I1101 10:15:51.112628    1340 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.315973    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldrr\" (UniqueName: \"kubernetes.io/projected/4c409377-301d-463a-8a0e-beb0afb959c7-kube-api-access-vldrr\") pod \"kube-proxy-5mqgt\" (UID: \"4c409377-301d-463a-8a0e-beb0afb959c7\") " pod="kube-system/kube-proxy-5mqgt"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316020    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/263025a4-2ce5-48bc-805a-20a2a35bb5f2-lib-modules\") pod \"kindnet-vlk6r\" (UID: \"263025a4-2ce5-48bc-805a-20a2a35bb5f2\") " pod="kube-system/kindnet-vlk6r"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316040    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c409377-301d-463a-8a0e-beb0afb959c7-xtables-lock\") pod \"kube-proxy-5mqgt\" (UID: \"4c409377-301d-463a-8a0e-beb0afb959c7\") " pod="kube-system/kube-proxy-5mqgt"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316057    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c409377-301d-463a-8a0e-beb0afb959c7-kube-proxy\") pod \"kube-proxy-5mqgt\" (UID: \"4c409377-301d-463a-8a0e-beb0afb959c7\") " pod="kube-system/kube-proxy-5mqgt"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316159    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/263025a4-2ce5-48bc-805a-20a2a35bb5f2-cni-cfg\") pod \"kindnet-vlk6r\" (UID: \"263025a4-2ce5-48bc-805a-20a2a35bb5f2\") " pod="kube-system/kindnet-vlk6r"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316188    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/263025a4-2ce5-48bc-805a-20a2a35bb5f2-xtables-lock\") pod \"kindnet-vlk6r\" (UID: \"263025a4-2ce5-48bc-805a-20a2a35bb5f2\") " pod="kube-system/kindnet-vlk6r"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316218    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c409377-301d-463a-8a0e-beb0afb959c7-lib-modules\") pod \"kube-proxy-5mqgt\" (UID: \"4c409377-301d-463a-8a0e-beb0afb959c7\") " pod="kube-system/kube-proxy-5mqgt"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.316264    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk6sz\" (UniqueName: \"kubernetes.io/projected/263025a4-2ce5-48bc-805a-20a2a35bb5f2-kube-api-access-mk6sz\") pod \"kindnet-vlk6r\" (UID: \"263025a4-2ce5-48bc-805a-20a2a35bb5f2\") " pod="kube-system/kindnet-vlk6r"
	Nov 01 10:15:53 pause-297661 kubelet[1340]: I1101 10:15:53.739128    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5mqgt" podStartSLOduration=1.739104516 podStartE2EDuration="1.739104516s" podCreationTimestamp="2025-11-01 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:15:53.728453918 +0000 UTC m=+7.145262491" watchObservedRunningTime="2025-11-01 10:15:53.739104516 +0000 UTC m=+7.155913090"
	Nov 01 10:15:56 pause-297661 kubelet[1340]: I1101 10:15:56.830425    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vlk6r" podStartSLOduration=4.830399347 podStartE2EDuration="4.830399347s" podCreationTimestamp="2025-11-01 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:15:53.739090826 +0000 UTC m=+7.155899422" watchObservedRunningTime="2025-11-01 10:15:56.830399347 +0000 UTC m=+10.247207930"
	Nov 01 10:16:04 pause-297661 kubelet[1340]: I1101 10:16:04.441928    1340 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:16:04 pause-297661 kubelet[1340]: I1101 10:16:04.899856    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pctk\" (UniqueName: \"kubernetes.io/projected/1680b086-3fa8-4b80-9705-650dcd1f0da2-kube-api-access-4pctk\") pod \"coredns-66bc5c9577-sdhft\" (UID: \"1680b086-3fa8-4b80-9705-650dcd1f0da2\") " pod="kube-system/coredns-66bc5c9577-sdhft"
	Nov 01 10:16:04 pause-297661 kubelet[1340]: I1101 10:16:04.899923    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1680b086-3fa8-4b80-9705-650dcd1f0da2-config-volume\") pod \"coredns-66bc5c9577-sdhft\" (UID: \"1680b086-3fa8-4b80-9705-650dcd1f0da2\") " pod="kube-system/coredns-66bc5c9577-sdhft"
	Nov 01 10:16:05 pause-297661 kubelet[1340]: I1101 10:16:05.784447    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sdhft" podStartSLOduration=13.784416618 podStartE2EDuration="13.784416618s" podCreationTimestamp="2025-11-01 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:16:05.769160084 +0000 UTC m=+19.185968675" watchObservedRunningTime="2025-11-01 10:16:05.784416618 +0000 UTC m=+19.201225192"
	Nov 01 10:16:10 pause-297661 kubelet[1340]: W1101 10:16:10.764142    1340 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 01 10:16:10 pause-297661 kubelet[1340]: E1101 10:16:10.764256    1340 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 01 10:16:10 pause-297661 kubelet[1340]: E1101 10:16:10.764320    1340 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 10:16:10 pause-297661 kubelet[1340]: E1101 10:16:10.764332    1340 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 10:16:14 pause-297661 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:16:14 pause-297661 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:16:14 pause-297661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:16:14 pause-297661 systemd[1]: kubelet.service: Consumed 1.348s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-297661 -n pause-297661
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-297661 -n pause-297661: exit status 2 (444.731819ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-297661 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.195024ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:18:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-556573 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-556573 describe deploy/metrics-server -n kube-system: exit status 1 (65.473148ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-556573 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-556573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-556573:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e",
	        "Created": "2025-11-01T10:17:54.292571852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 740394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:17:54.325766927Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/hosts",
	        "LogPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e-json.log",
	        "Name": "/old-k8s-version-556573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-556573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-556573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e",
	                "LowerDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-556573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-556573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-556573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-556573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-556573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ffcdd6c76f14bc38d06481c48bf28bc92c3598ee2d455dfd99881c4882a195e",
	            "SandboxKey": "/var/run/docker/netns/0ffcdd6c76f1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-556573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:52:fd:5f:fc:30",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bbcdd55cf2cbe101dd2954fd5b3da9010f13fa5cf479e04754b13ce474d6499d",
	                    "EndpointID": "5f1da8db41c00f209c5cc0cceea2051d722c5d8317bbfa520b329bd1f2228185",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-556573",
	                        "fa365e4464f7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556573 -n old-k8s-version-556573
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-556573 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-556573 logs -n 25: (1.048999761s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-456743 sudo containerd config dump                                                                                                                                                                                                  │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p cilium-456743 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p cilium-456743 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p cilium-456743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p cilium-456743 sudo crio config                                                                                                                                                                                                             │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ delete  │ -p cilium-456743                                                                                                                                                                                                                              │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ cert-options-278823 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-278823       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ -p cert-options-278823 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-278823       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p cert-options-278823                                                                                                                                                                                                                        │ cert-options-278823       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p force-systemd-flag-767379 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ stop    │ -p kubernetes-upgrade-949166                                                                                                                                                                                                                  │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ stop    │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ ssh     │ force-systemd-flag-767379 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p force-systemd-flag-767379                                                                                                                                                                                                                  │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:17:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:17:54.329680  740314 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:17:54.329810  740314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:17:54.329819  740314 out.go:374] Setting ErrFile to fd 2...
	I1101 10:17:54.329823  740314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:17:54.330082  740314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:17:54.330569  740314 out.go:368] Setting JSON to false
	I1101 10:17:54.332514  740314 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10811,"bootTime":1761981463,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:17:54.332630  740314 start.go:143] virtualization: kvm guest
	I1101 10:17:54.334427  740314 out.go:179] * [no-preload-680879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:17:54.335421  740314 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:17:54.335471  740314 notify.go:221] Checking for updates...
	I1101 10:17:54.337178  740314 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:17:54.341595  740314 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:17:54.342594  740314 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:17:54.343504  740314 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:17:54.344372  740314 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:17:54.345806  740314 config.go:182] Loaded profile config "cert-expiration-577441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:17:54.345947  740314 config.go:182] Loaded profile config "kubernetes-upgrade-949166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:17:54.346056  740314 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:17:54.346150  740314 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:17:54.371822  740314 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:17:54.371998  740314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:17:54.442239  740314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:17:54.431754685 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:17:54.442348  740314 docker.go:319] overlay module found
	I1101 10:17:54.443746  740314 out.go:179] * Using the docker driver based on user configuration
	I1101 10:17:54.444666  740314 start.go:309] selected driver: docker
	I1101 10:17:54.444683  740314 start.go:930] validating driver "docker" against <nil>
	I1101 10:17:54.444698  740314 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:17:54.445597  740314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:17:54.510488  740314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-01 10:17:54.499507758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:17:54.510818  740314 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:17:54.511105  740314 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:17:54.512703  740314 out.go:179] * Using Docker driver with root privileges
	I1101 10:17:54.513691  740314 cni.go:84] Creating CNI manager for ""
	I1101 10:17:54.513784  740314 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:17:54.513800  740314 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:17:54.513888  740314 start.go:353] cluster config:
	{Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
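The cluster config dumped above is what gets persisted to the profile's config.json a few lines later. As a reference only, here is a minimal Go sketch of reading back a handful of those fields, assuming the profile path shown in the log; the struct below is a hypothetical subset, not minikube's actual config types.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// partialConfig mirrors only a few of the fields visible in the dump above.
type partialConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
		ClusterName       string
	}
}

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json")
	if err != nil {
		panic(err)
	}
	var cfg partialConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s k8s=%s runtime=%s mem=%dMB cpus=%d\n",
		cfg.Name, cfg.Driver,
		cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime,
		cfg.Memory, cfg.CPUs)
}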
	I1101 10:17:54.516003  740314 out.go:179] * Starting "no-preload-680879" primary control-plane node in "no-preload-680879" cluster
	I1101 10:17:54.519217  740314 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:17:54.520287  740314 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:17:54.521185  740314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:17:54.521273  740314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:17:54.521323  740314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json ...
	I1101 10:17:54.521368  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json: {Name:mkda05d903eb5a2c45b9b0342753da0683264af7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:54.521500  740314 cache.go:107] acquiring lock: {Name:mk54c640473c09dfff1239ead2dd2d51481a015a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521544  740314 cache.go:107] acquiring lock: {Name:mkf19fdae2c3486652a390b24771bb4742a08787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521607  740314 cache.go:107] acquiring lock: {Name:mke846f8ed0eae3f666a2c55755500ad865ceb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521625  740314 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:54.521622  740314 cache.go:107] acquiring lock: {Name:mke53a0d558f57413c985e8c7d551691237ca10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521685  740314 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:54.521720  740314 cache.go:107] acquiring lock: {Name:mka96111944f8dc8ebfdcd94de79dafd069ca1d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521759  740314 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:54.521735  740314 cache.go:107] acquiring lock: {Name:mkcd303cc659630879e706aba8fe46f709be28e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521737  740314 cache.go:107] acquiring lock: {Name:mk1c05d679d90243f04dc9223673738f53287a15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521789  740314 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:54.521798  740314 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:54.521497  740314 cache.go:107] acquiring lock: {Name:mke74377eb8e8f0a2186d46bf4bdde02a944c052 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.522016  740314 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:17:54.522041  740314 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:17:54.522053  740314 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 576.984µs
	I1101 10:17:54.522064  740314 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:17:54.522126  740314 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:54.523285  740314 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:54.523384  740314 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:54.523290  740314 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:17:54.523291  740314 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:54.523292  740314 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:54.523499  740314 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:54.523434  740314 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:54.545323  740314 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:17:54.545353  740314 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:17:54.545369  740314 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:17:54.545411  740314 start.go:360] acquireMachinesLock for no-preload-680879: {Name:mkb2bd3a5c4fc957e021ade32b7982a68330a2bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.545543  740314 start.go:364] duration metric: took 106.867µs to acquireMachinesLock for "no-preload-680879"
	I1101 10:17:54.545576  740314 start.go:93] Provisioning new machine with config: &{Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:17:54.545676  740314 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:17:54.013639  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:17:54.013707  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:17:54.208676  738963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-556573:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490403699s)
	I1101 10:17:54.208718  738963 kic.go:203] duration metric: took 4.490574402s to extract preloaded images to volume ...
	W1101 10:17:54.208871  738963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:17:54.208914  738963 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:17:54.208967  738963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:17:54.273343  738963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-556573 --name old-k8s-version-556573 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-556573 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-556573 --network old-k8s-version-556573 --ip 192.168.94.2 --volume old-k8s-version-556573:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:17:54.580571  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Running}}
	I1101 10:17:54.601970  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:17:54.625681  738963 cli_runner.go:164] Run: docker exec old-k8s-version-556573 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:17:54.676929  738963 oci.go:144] the created container "old-k8s-version-556573" has a running status.
	I1101 10:17:54.676987  738963 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa...
	I1101 10:17:55.057809  738963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:17:55.095198  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:17:55.116623  738963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:17:55.116650  738963 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-556573 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:17:55.165567  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:17:55.187143  738963 machine.go:94] provisionDockerMachine start ...
	I1101 10:17:55.187250  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:55.208309  738963 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:55.208652  738963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1101 10:17:55.208667  738963 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:17:55.370206  738963 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556573
	
	I1101 10:17:55.370240  738963 ubuntu.go:182] provisioning hostname "old-k8s-version-556573"
	I1101 10:17:55.370331  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:55.396282  738963 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:55.396830  738963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1101 10:17:55.396877  738963 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556573 && echo "old-k8s-version-556573" | sudo tee /etc/hostname
	I1101 10:17:55.563124  738963 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556573
	
	I1101 10:17:55.563208  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:55.584571  738963 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:55.584864  738963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1101 10:17:55.584891  738963 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556573/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:17:55.736331  738963 main.go:143] libmachine: SSH cmd err, output: <nil>: 
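The SSH command above is an idempotent /etc/hosts edit: it maps 127.0.1.1 to the node's hostname only if no mapping exists yet. A minimal sketch of rendering the same guarded snippet for an arbitrary hostname; hostsSnippet is a made-up helper for illustration, not a minikube function.

package main

import "fmt"

// hostsSnippet returns the same guarded /etc/hosts edit the provisioner ran above.
func hostsSnippet(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsSnippet("old-k8s-version-556573"))
}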
	I1101 10:17:55.736363  738963 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:17:55.736390  738963 ubuntu.go:190] setting up certificates
	I1101 10:17:55.736405  738963 provision.go:84] configureAuth start
	I1101 10:17:55.736468  738963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556573
	I1101 10:17:55.756180  738963 provision.go:143] copyHostCerts
	I1101 10:17:55.756257  738963 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:17:55.756274  738963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:17:55.756382  738963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:17:55.756517  738963 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:17:55.756532  738963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:17:55.756572  738963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:17:55.756657  738963 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:17:55.756669  738963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:17:55.756719  738963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:17:55.756796  738963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556573 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-556573]
	I1101 10:17:56.126009  738963 provision.go:177] copyRemoteCerts
	I1101 10:17:56.126086  738963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:17:56.126148  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:56.152270  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:56.269687  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:17:56.310682  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:17:56.337656  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:17:56.361422  738963 provision.go:87] duration metric: took 624.997549ms to configureAuth
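configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the container IP 192.168.94.2, localhost, minikube and the profile name. A minimal sketch of issuing a certificate with those SANs using crypto/x509; it is self-signed for brevity (minikube signs with its own CA), so treat it purely as an illustration of the SAN handling, not minikube's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-556573"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-556573"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}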
	I1101 10:17:56.361463  738963 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:17:56.361672  738963 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:17:56.361790  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:56.385162  738963 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:56.385532  738963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1101 10:17:56.385561  738963 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:17:56.688357  738963 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:17:56.688383  738963 machine.go:97] duration metric: took 1.501214294s to provisionDockerMachine
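All of the provisioning commands above reach the node over SSH on a port Docker published to 127.0.0.1 (33173 for this container), authenticated with the generated machine key as the "docker" user. A minimal sketch of that transport using golang.org/x/crypto/ssh; it runs `cat /etc/os-release`, a command that also appears in the log below, and is only an illustration, not minikube's ssh_runner.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33173", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}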
	I1101 10:17:56.688395  738963 client.go:176] duration metric: took 7.678945711s to LocalClient.Create
	I1101 10:17:56.688410  738963 start.go:167] duration metric: took 7.679147879s to libmachine.API.Create "old-k8s-version-556573"
	I1101 10:17:56.688425  738963 start.go:293] postStartSetup for "old-k8s-version-556573" (driver="docker")
	I1101 10:17:56.688435  738963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:17:56.688499  738963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:17:56.688538  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:56.707058  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:56.811712  738963 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:17:56.818016  738963 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:17:56.818046  738963 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:17:56.818058  738963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:17:56.818112  738963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:17:56.818193  738963 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:17:56.818294  738963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:17:56.827020  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:17:56.850579  738963 start.go:296] duration metric: took 162.137964ms for postStartSetup
	I1101 10:17:56.850976  738963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556573
	I1101 10:17:56.873161  738963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/config.json ...
	I1101 10:17:56.873459  738963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:17:56.873516  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:56.891802  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:56.991369  738963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:17:56.996386  738963 start.go:128] duration metric: took 7.990475464s to createHost
	I1101 10:17:56.996416  738963 start.go:83] releasing machines lock for "old-k8s-version-556573", held for 7.99063659s
	I1101 10:17:56.996498  738963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556573
	I1101 10:17:57.015266  738963 ssh_runner.go:195] Run: cat /version.json
	I1101 10:17:57.015332  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:57.015397  738963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:17:57.015477  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:57.034043  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:57.034509  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:57.187648  738963 ssh_runner.go:195] Run: systemctl --version
	I1101 10:17:57.195510  738963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:17:57.234782  738963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:17:57.239705  738963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:17:57.239772  738963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:17:57.267147  738963 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:17:57.267176  738963 start.go:496] detecting cgroup driver to use...
	I1101 10:17:57.267220  738963 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:17:57.267280  738963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:17:57.285222  738963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:17:57.298477  738963 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:17:57.298534  738963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:17:57.317234  738963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:17:57.336745  738963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:17:57.421539  738963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:17:57.515217  738963 docker.go:234] disabling docker service ...
	I1101 10:17:57.515296  738963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:17:57.534882  738963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:17:57.548727  738963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:17:57.636169  738963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:17:57.726612  738963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:17:57.740232  738963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:17:57.755975  738963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:17:57.756033  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.767047  738963 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:17:57.767122  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.777195  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.787417  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.797339  738963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:17:57.807067  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.816832  738963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.831783  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.841791  738963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:17:57.850716  738963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:17:57.859911  738963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:17:57.947816  738963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:17:58.256302  738963 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:17:58.256372  738963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:17:58.261072  738963 start.go:564] Will wait 60s for crictl version
	I1101 10:17:58.261134  738963 ssh_runner.go:195] Run: which crictl
	I1101 10:17:58.264803  738963 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:17:58.292615  738963 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
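The "Will wait 60s for crictl version" step above boils down to polling the CRI socket until it answers. A minimal sketch of that wait loop, reusing the `crictl version` invocation shown in the log; the binary path is the one logged, and the 2s poll interval is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// Same command the log runs: sudo /usr/local/bin/crictl version
		out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for CRI runtime")
}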
	I1101 10:17:58.292694  738963 ssh_runner.go:195] Run: crio --version
	I1101 10:17:58.324924  738963 ssh_runner.go:195] Run: crio --version
	I1101 10:17:58.357678  738963 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 10:17:58.358745  738963 cli_runner.go:164] Run: docker network inspect old-k8s-version-556573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:17:58.377453  738963 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 10:17:58.382358  738963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:17:58.396531  738963 kubeadm.go:884] updating cluster {Name:old-k8s-version-556573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:17:58.396716  738963 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:17:58.396787  738963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:17:58.435580  738963 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:17:58.435605  738963 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:17:58.435649  738963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:17:58.464855  738963 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:17:58.464883  738963 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:17:58.464893  738963 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1101 10:17:58.464997  738963 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-556573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:17:58.465081  738963 ssh_runner.go:195] Run: crio config
	I1101 10:17:58.520068  738963 cni.go:84] Creating CNI manager for ""
	I1101 10:17:58.520093  738963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:17:58.520111  738963 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:17:58.520135  738963 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556573 NodeName:old-k8s-version-556573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:17:58.520324  738963 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-556573"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:17:58.520383  738963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:17:58.529452  738963 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:17:58.529530  738963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:17:58.538346  738963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:17:58.552569  738963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:17:58.569689  738963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
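The 2159-byte kubeadm.yaml written above carries both the pod subnet (10.244.0.0/16) and the service subnet (10.96.0.0/12). A quick sketch of the sanity check that those two ranges are disjoint, using the exact values from the generated config; this is only an illustration, not a step the log shows minikube performing.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pods := netip.MustParsePrefix("10.244.0.0/16")     // podSubnet in the config above
	services := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet in the config above
	if pods.Overlaps(services) {
		fmt.Println("pod and service CIDRs overlap")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}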
	I1101 10:17:58.584988  738963 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:17:58.588925  738963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:17:58.600152  738963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:17:58.688530  738963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:17:58.711925  738963 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573 for IP: 192.168.94.2
	I1101 10:17:58.711957  738963 certs.go:195] generating shared ca certs ...
	I1101 10:17:58.711989  738963 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:58.712161  738963 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:17:58.712217  738963 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:17:58.712230  738963 certs.go:257] generating profile certs ...
	I1101 10:17:58.712299  738963 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.key
	I1101 10:17:58.712316  738963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt with IP's: []
	I1101 10:17:54.548181  740314 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:17:54.548448  740314 start.go:159] libmachine.API.Create for "no-preload-680879" (driver="docker")
	I1101 10:17:54.548503  740314 client.go:173] LocalClient.Create starting
	I1101 10:17:54.548566  740314 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem
	I1101 10:17:54.548613  740314 main.go:143] libmachine: Decoding PEM data...
	I1101 10:17:54.548646  740314 main.go:143] libmachine: Parsing certificate...
	I1101 10:17:54.548730  740314 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem
	I1101 10:17:54.548757  740314 main.go:143] libmachine: Decoding PEM data...
	I1101 10:17:54.548785  740314 main.go:143] libmachine: Parsing certificate...
	I1101 10:17:54.549266  740314 cli_runner.go:164] Run: docker network inspect no-preload-680879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:17:54.567956  740314 cli_runner.go:211] docker network inspect no-preload-680879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:17:54.568065  740314 network_create.go:284] running [docker network inspect no-preload-680879] to gather additional debugging logs...
	I1101 10:17:54.568083  740314 cli_runner.go:164] Run: docker network inspect no-preload-680879
	W1101 10:17:54.587569  740314 cli_runner.go:211] docker network inspect no-preload-680879 returned with exit code 1
	I1101 10:17:54.587597  740314 network_create.go:287] error running [docker network inspect no-preload-680879]: docker network inspect no-preload-680879: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-680879 not found
	I1101 10:17:54.587611  740314 network_create.go:289] output of [docker network inspect no-preload-680879]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-680879 not found
	
	** /stderr **
	I1101 10:17:54.587730  740314 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:17:54.608251  740314 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db3052bfa0e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:6a:af:78:80:46} reservation:<nil>}
	I1101 10:17:54.609244  740314 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-99d2741e1e59 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:99:ce:91:38:1c} reservation:<nil>}
	I1101 10:17:54.610099  740314 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a696a61d1319 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:f0:66:2c:aa:f2} reservation:<nil>}
	I1101 10:17:54.610614  740314 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d8ebd2dfecb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:d8:5a:bb:d5:46} reservation:<nil>}
	I1101 10:17:54.611489  740314 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00244b380}
	I1101 10:17:54.611524  740314 network_create.go:124] attempt to create docker network no-preload-680879 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:17:54.611578  740314 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-680879 no-preload-680879
	I1101 10:17:54.680988  740314 network_create.go:108] docker network no-preload-680879 192.168.85.0/24 created
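The network_create step above walks candidate private /24 subnets (192.168.49.0, .58, .67, .76, ...) and takes the first one no existing docker network occupies, landing on 192.168.85.0/24 here. A minimal sketch of that scan; the step of 9 in the third octet mirrors the values in the log, and the `taken` list is hard-coded where minikube inspects the live docker networks.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Subnets the log reports as already taken by existing bridges.
	taken := []netip.Prefix{
		netip.MustParsePrefix("192.168.49.0/24"),
		netip.MustParsePrefix("192.168.58.0/24"),
		netip.MustParsePrefix("192.168.67.0/24"),
		netip.MustParsePrefix("192.168.76.0/24"),
	}
	for third := 49; third < 256; third += 9 {
		candidate := netip.MustParsePrefix(fmt.Sprintf("192.168.%d.0/24", third))
		free := true
		for _, t := range taken {
			if candidate.Overlaps(t) {
				free = false
				break
			}
		}
		if free {
			fmt.Println("using free private subnet", candidate)
			return
		}
	}
	fmt.Println("no free subnet found")
}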
	I1101 10:17:54.681029  740314 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-680879" container
	I1101 10:17:54.681103  740314 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:17:54.700896  740314 cli_runner.go:164] Run: docker volume create no-preload-680879 --label name.minikube.sigs.k8s.io=no-preload-680879 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:17:54.702737  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:17:54.722708  740314 oci.go:103] Successfully created a docker volume no-preload-680879
	I1101 10:17:54.722816  740314 cli_runner.go:164] Run: docker run --rm --name no-preload-680879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-680879 --entrypoint /usr/bin/test -v no-preload-680879:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:17:54.722882  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:17:54.728568  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:17:54.746598  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1101 10:17:54.770083  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:17:54.837135  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:17:54.853925  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:17:54.853956  740314 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 332.276568ms
	I1101 10:17:54.853975  740314 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:17:54.854387  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:17:55.193234  740314 oci.go:107] Successfully prepared a docker volume no-preload-680879
	I1101 10:17:55.193274  740314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1101 10:17:55.193371  740314 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:17:55.193398  740314 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:17:55.193455  740314 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:17:55.261204  740314 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-680879 --name no-preload-680879 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-680879 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-680879 --network no-preload-680879 --ip 192.168.85.2 --volume no-preload-680879:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:17:55.433717  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:17:55.433746  740314 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 912.265964ms
	I1101 10:17:55.433761  740314 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:17:55.571376  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Running}}
	I1101 10:17:55.592546  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:17:55.612792  740314 cli_runner.go:164] Run: docker exec no-preload-680879 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:17:55.663201  740314 oci.go:144] the created container "no-preload-680879" has a running status.
	I1101 10:17:55.663244  740314 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa...
	I1101 10:17:56.339008  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:17:56.339041  740314 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.817357447s
	I1101 10:17:56.339064  740314 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:17:56.353273  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:17:56.353299  740314 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.831581009s
	I1101 10:17:56.353313  740314 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:17:56.455888  740314 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:17:56.487344  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:17:56.512170  740314 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:17:56.512194  740314 kic_runner.go:114] Args: [docker exec --privileged no-preload-680879 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:17:56.527585  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:17:56.527618  740314 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.006015881s
	I1101 10:17:56.527633  740314 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:17:56.566801  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:17:56.588668  740314 machine.go:94] provisionDockerMachine start ...
	I1101 10:17:56.588764  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:56.610299  740314 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:56.610586  740314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1101 10:17:56.610601  740314 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:17:56.620626  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:17:56.620660  740314 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.099116432s
	I1101 10:17:56.620679  740314 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:17:56.759921  740314 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-680879
	
	I1101 10:17:56.759958  740314 ubuntu.go:182] provisioning hostname "no-preload-680879"
	I1101 10:17:56.760025  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:56.781024  740314 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:56.781597  740314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1101 10:17:56.781625  740314 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-680879 && echo "no-preload-680879" | sudo tee /etc/hostname
	I1101 10:17:56.871686  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:17:56.871718  740314 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.350192312s
	I1101 10:17:56.871734  740314 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:17:56.871757  740314 cache.go:87] Successfully saved all images to host disk.
	I1101 10:17:56.939360  740314 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-680879
	
	I1101 10:17:56.939455  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:56.957720  740314 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:56.957974  740314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1101 10:17:56.957993  740314 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-680879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-680879/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-680879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:17:57.101866  740314 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:17:57.101908  740314 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:17:57.101930  740314 ubuntu.go:190] setting up certificates
	I1101 10:17:57.101943  740314 provision.go:84] configureAuth start
	I1101 10:17:57.102011  740314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:17:57.119619  740314 provision.go:143] copyHostCerts
	I1101 10:17:57.119682  740314 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:17:57.119692  740314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:17:57.119759  740314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:17:57.119894  740314 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:17:57.119904  740314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:17:57.119936  740314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:17:57.120058  740314 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:17:57.120070  740314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:17:57.120096  740314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:17:57.120152  740314 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.no-preload-680879 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-680879]
	I1101 10:17:57.191661  740314 provision.go:177] copyRemoteCerts
	I1101 10:17:57.191731  740314 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:17:57.191794  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:57.210790  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:57.315284  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:17:57.336315  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:17:57.355800  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:17:57.379678  740314 provision.go:87] duration metric: took 277.720039ms to configureAuth
	I1101 10:17:57.379711  740314 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:17:57.379936  740314 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:17:57.380129  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:57.399271  740314 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:57.399495  740314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1101 10:17:57.399513  740314 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:17:57.672306  740314 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:17:57.672343  740314 machine.go:97] duration metric: took 1.083651161s to provisionDockerMachine
	I1101 10:17:57.672358  740314 client.go:176] duration metric: took 3.123842795s to LocalClient.Create
	I1101 10:17:57.672375  740314 start.go:167] duration metric: took 3.123928426s to libmachine.API.Create "no-preload-680879"
	I1101 10:17:57.672386  740314 start.go:293] postStartSetup for "no-preload-680879" (driver="docker")
	I1101 10:17:57.672407  740314 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:17:57.672475  740314 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:17:57.672524  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:57.693139  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:57.799034  740314 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:17:57.802797  740314 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:17:57.802832  740314 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:17:57.802860  740314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:17:57.802922  740314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:17:57.803020  740314 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:17:57.803151  740314 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:17:57.812099  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:17:57.833816  740314 start.go:296] duration metric: took 161.404788ms for postStartSetup
	I1101 10:17:57.834255  740314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:17:57.853437  740314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json ...
	I1101 10:17:57.853724  740314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:17:57.853780  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:57.874008  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:57.976717  740314 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:17:57.982245  740314 start.go:128] duration metric: took 3.436549965s to createHost
	I1101 10:17:57.982282  740314 start.go:83] releasing machines lock for "no-preload-680879", held for 3.436721676s
	I1101 10:17:57.982356  740314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:17:58.000977  740314 ssh_runner.go:195] Run: cat /version.json
	I1101 10:17:58.001068  740314 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:17:58.001089  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:58.001139  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:58.020529  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:58.020758  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:58.205927  740314 ssh_runner.go:195] Run: systemctl --version
	I1101 10:17:58.213192  740314 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:17:58.253117  740314 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:17:58.258886  740314 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:17:58.258962  740314 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:17:58.288803  740314 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:17:58.288830  740314 start.go:496] detecting cgroup driver to use...
	I1101 10:17:58.288893  740314 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:17:58.288941  740314 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:17:58.307356  740314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:17:58.322675  740314 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:17:58.322736  740314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:17:58.341157  740314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:17:58.360947  740314 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:17:58.456227  740314 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:17:58.561057  740314 docker.go:234] disabling docker service ...
	I1101 10:17:58.561131  740314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:17:58.582658  740314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:17:58.597232  740314 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:17:58.695614  740314 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:17:58.793168  740314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:17:58.807256  740314 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:17:58.823260  740314 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:17:58.823330  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.834779  740314 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:17:58.834884  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.845319  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.855201  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.864874  740314 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:17:58.873856  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.883617  740314 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.899043  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.908700  740314 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:17:58.917085  740314 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:17:58.925384  740314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:17:59.008235  740314 ssh_runner.go:195] Run: sudo systemctl restart crio
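The sed and grep one-liners logged above all edit the same CRI-O drop-in file before crio is restarted. Reconstructed from those commands (not captured from the node), /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly the following keys, each under its existing [crio.*] section:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]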
	I1101 10:17:59.118729  740314 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:17:59.118806  740314 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:17:59.123077  740314 start.go:564] Will wait 60s for crictl version
	I1101 10:17:59.123150  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.127128  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:17:59.155569  740314 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:17:59.155656  740314 ssh_runner.go:195] Run: crio --version
	I1101 10:17:59.186953  740314 ssh_runner.go:195] Run: crio --version
	I1101 10:17:59.219966  740314 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:17:59.221021  740314 cli_runner.go:164] Run: docker network inspect no-preload-680879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:17:59.239482  740314 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:17:59.244202  740314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:17:59.255809  740314 kubeadm.go:884] updating cluster {Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:17:59.255946  740314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:17:59.255980  740314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:17:59.284438  740314 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 10:17:59.284468  740314 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 10:17:59.284523  740314 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:17:59.284528  740314 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.284563  740314 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:17:59.284605  740314 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.284618  740314 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.284623  740314 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.284646  740314 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.284603  740314 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.286000  740314 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.286051  740314 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.286080  740314 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.286006  740314 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:17:59.286113  740314 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:17:59.286129  740314 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.286137  740314 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.286007  740314 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.015929  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:17:59.015970  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:17:59.023848  738963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt ...
	I1101 10:17:59.023883  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: {Name:mk60f4f77d4ab12ba9513b9be0f8dc061ffb192a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.024070  738963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.key ...
	I1101 10:17:59.024088  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.key: {Name:mka0dfbc519768f58fceb8fac999651371c9277a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.024213  738963 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f
	I1101 10:17:59.024235  738963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt.91d3229f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1101 10:17:59.350123  738963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt.91d3229f ...
	I1101 10:17:59.350152  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt.91d3229f: {Name:mke720aa52c5354bd5eabee42f543e759ac9c73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.350361  738963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f ...
	I1101 10:17:59.350383  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f: {Name:mk0fa0fd43be446018f9e7889bd59f3ff8f7bc1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.350501  738963 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt.91d3229f -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt
	I1101 10:17:59.350586  738963 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key
	I1101 10:17:59.350641  738963 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key
	I1101 10:17:59.350657  738963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt with IP's: []
	I1101 10:17:59.534613  738963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt ...
	I1101 10:17:59.534657  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt: {Name:mkdfa4ecfaa9cdd60452e28a809d1069cb4a4e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.534923  738963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key ...
	I1101 10:17:59.534997  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key: {Name:mk356ae409e016efeaed9ce8e67efa99bdf488f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
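minikube generates these profile certificates in-process (crypto.go); no openssl invocation appears in the log. As a rough shell equivalent of the apiserver cert step above, with placeholder file names and the SAN list taken from the log:

	# illustrative only; minikube does this in Go, not via openssl
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2") \
	  -days 365 -out apiserver.crt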
	I1101 10:17:59.535274  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:17:59.535317  738963 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:17:59.535330  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:17:59.535358  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:17:59.535382  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:17:59.535408  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:17:59.535458  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:17:59.536448  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:17:59.566815  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:17:59.592988  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:17:59.620788  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:17:59.648105  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:17:59.679872  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:17:59.699674  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:17:59.720891  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:17:59.740573  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:17:59.762751  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:17:59.782484  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:17:59.802771  738963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:17:59.820170  738963 ssh_runner.go:195] Run: openssl version
	I1101 10:17:59.829227  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:17:59.840352  738963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:17:59.845423  738963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:17:59.845512  738963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:17:59.886176  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:17:59.898071  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:17:59.909228  738963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:17:59.915108  738963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:17:59.915181  738963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:17:59.953104  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:17:59.963399  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:17:59.974929  738963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:17:59.980231  738963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:17:59.980305  738963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:18:00.020858  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
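The openssl x509 -hash calls above exist to build the /etc/ssl/certs/<hash>.0 symlinks that OpenSSL uses for CA lookup; b5213941, for example, is the subject hash of minikubeCA.pem in this run. Written out by hand, the pattern for one certificate is:

	# compute the subject hash, then link <hash>.0 to the installed CA file
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0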
	I1101 10:18:00.033137  738963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:18:00.039112  738963 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:18:00.039182  738963 kubeadm.go:401] StartCluster: {Name:old-k8s-version-556573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:18:00.039287  738963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:18:00.039353  738963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:18:00.083742  738963 cri.go:89] found id: ""
	I1101 10:18:00.083825  738963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:18:00.106819  738963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:18:00.117922  738963 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:18:00.117987  738963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:18:00.129125  738963 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:18:00.129155  738963 kubeadm.go:158] found existing configuration files:
	
	I1101 10:18:00.129216  738963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:18:00.138980  738963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:18:00.139046  738963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:18:00.149281  738963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:18:00.159942  738963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:18:00.160011  738963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:18:00.169889  738963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:18:00.180613  738963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:18:00.180700  738963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:18:00.191183  738963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:18:00.203862  738963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:18:00.203940  738963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
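The four grep/rm pairs above implement one check: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. Condensed into a sketch of the same logic:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done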
	I1101 10:18:00.215749  738963 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:18:00.342016  738963 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:18:00.449936  738963 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:17:59.444649  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.447737  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.449432  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.453874  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1101 10:17:59.494793  740314 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1101 10:17:59.494869  740314 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.494930  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.496996  740314 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1101 10:17:59.497037  740314 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.497037  740314 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1101 10:17:59.497071  740314 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.497084  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.497121  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.500444  740314 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1101 10:17:59.500496  740314 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1101 10:17:59.500536  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.500543  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.502918  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.502924  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.506493  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.509637  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.540787  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.540935  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:17:59.544140  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.544238  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.549134  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.558751  740314 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1101 10:17:59.558804  740314 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.559078  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.566504  740314 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1101 10:17:59.566559  740314 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.566613  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.582355  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.582412  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:17:59.582464  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.582490  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.601305  740314 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1101 10:17:59.601346  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.601354  740314 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.601423  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.601443  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.621556  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:17:59.621614  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:17:59.621653  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:17:59.621663  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:17:59.621704  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:17:59.621713  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:17:59.621729  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:17:59.639291  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.639344  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.639377  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.639382  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1101 10:17:59.639398  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1101 10:17:59.639416  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1101 10:17:59.639410  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1101 10:17:59.677939  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1101 10:17:59.678011  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1101 10:17:59.678054  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1101 10:17:59.678172  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1101 10:17:59.695513  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.700932  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.700965  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.860502  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.860551  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1101 10:17:59.860585  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1101 10:17:59.860592  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:17:59.860691  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:17:59.861074  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:17:59.861164  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:17:59.920136  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1101 10:17:59.920150  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:17:59.920201  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1101 10:17:59.920284  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1101 10:17:59.920315  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:17:59.920319  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1101 10:17:59.950783  740314 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1101 10:17:59.950860  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1101 10:17:59.991758  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1101 10:17:59.991816  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1101 10:18:00.189619  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
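Each image in the stat / scp / podman load sequence above follows the same per-image cycle. Roughly, with a placeholder "node" host standing in for minikube's internal ssh_runner:

	IMG=pause_3.10.1
	CACHE=/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io
	# transfer the cached tarball only if it is not already present on the node
	ssh node "stat -c '%s %y' /var/lib/minikube/images/$IMG" \
	  || scp "$CACHE/$IMG" node:/var/lib/minikube/images/$IMG
	# then load it into the CRI-O image store via podman
	ssh node "sudo podman load -i /var/lib/minikube/images/$IMG"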
	I1101 10:18:00.323537  740314 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:18:00.323616  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:18:00.590359  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:01.700116  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.376469192s)
	I1101 10:18:01.700156  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1101 10:18:01.700174  740314 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.109776162s)
	I1101 10:18:01.700189  740314 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:18:01.700226  740314 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 10:18:01.700254  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:18:01.700262  740314 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:01.700302  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:18:02.868720  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.168433694s)
	I1101 10:18:02.868737  740314 ssh_runner.go:235] Completed: which crictl: (1.16841485s)
	I1101 10:18:02.868757  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1101 10:18:02.868792  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:02.868792  740314 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:18:02.868860  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:18:04.065722  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.196829212s)
	I1101 10:18:04.065758  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1101 10:18:04.065768  740314 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.196948933s)
	I1101 10:18:04.065798  740314 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:18:04.065858  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:18:04.065910  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:04.017260  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:04.017338  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:05.432984  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.367095482s)
	I1101 10:18:05.433013  740314 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.367065339s)
	I1101 10:18:05.433088  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:05.433020  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1101 10:18:05.433181  740314 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:18:05.433235  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:18:05.466257  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 10:18:05.466366  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:18:06.612033  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.178768896s)
	I1101 10:18:06.612063  740314 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.145678163s)
	I1101 10:18:06.612067  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1101 10:18:06.612088  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1101 10:18:06.612101  740314 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:18:06.612114  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1101 10:18:06.612163  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:18:09.020044  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:09.020094  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:09.141677  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:48880->192.168.103.2:8443: read: connection reset by peer
	I1101 10:18:09.512108  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:09.512610  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:10.562506  738963 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 10:18:10.562620  738963 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:18:10.562755  738963 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:18:10.562868  738963 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:18:10.562943  738963 kubeadm.go:319] OS: Linux
	I1101 10:18:10.563025  738963 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:18:10.563110  738963 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:18:10.563190  738963 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:18:10.563269  738963 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:18:10.563357  738963 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:18:10.563434  738963 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:18:10.563512  738963 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:18:10.563595  738963 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:18:10.563704  738963 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:18:10.563874  738963 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:18:10.564015  738963 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:18:10.564107  738963 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:18:10.566128  738963 out.go:252]   - Generating certificates and keys ...
	I1101 10:18:10.566232  738963 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:18:10.566320  738963 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:18:10.566418  738963 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:18:10.566501  738963 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:18:10.566589  738963 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:18:10.566680  738963 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:18:10.566791  738963 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:18:10.567013  738963 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-556573] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 10:18:10.567096  738963 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:18:10.567285  738963 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-556573] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 10:18:10.567380  738963 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:18:10.567475  738963 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:18:10.567546  738963 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:18:10.567626  738963 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:18:10.567708  738963 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:18:10.567799  738963 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:18:10.567915  738963 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:18:10.568011  738963 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:18:10.568135  738963 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:18:10.568225  738963 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:18:10.569340  738963 out.go:252]   - Booting up control plane ...
	I1101 10:18:10.569465  738963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:18:10.569587  738963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:18:10.569683  738963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:18:10.569855  738963 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:18:10.570001  738963 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:18:10.570068  738963 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:18:10.570278  738963 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 10:18:10.570410  738963 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.502593 seconds
	I1101 10:18:10.570557  738963 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:18:10.570730  738963 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:18:10.570809  738963 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:18:10.571102  738963 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-556573 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:18:10.571192  738963 kubeadm.go:319] [bootstrap-token] Using token: a2tmz3.w8jg1dq1lgatlgyo
	I1101 10:18:10.572772  738963 out.go:252]   - Configuring RBAC rules ...
	I1101 10:18:10.572931  738963 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:18:10.573037  738963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:18:10.573204  738963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:18:10.573356  738963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:18:10.573536  738963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:18:10.573675  738963 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:18:10.573828  738963 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:18:10.573900  738963 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:18:10.573953  738963 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:18:10.573968  738963 kubeadm.go:319] 
	I1101 10:18:10.574059  738963 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:18:10.574072  738963 kubeadm.go:319] 
	I1101 10:18:10.574172  738963 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:18:10.574179  738963 kubeadm.go:319] 
	I1101 10:18:10.574235  738963 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:18:10.574337  738963 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:18:10.574421  738963 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:18:10.574430  738963 kubeadm.go:319] 
	I1101 10:18:10.574512  738963 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:18:10.574521  738963 kubeadm.go:319] 
	I1101 10:18:10.574584  738963 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:18:10.574597  738963 kubeadm.go:319] 
	I1101 10:18:10.574675  738963 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:18:10.574779  738963 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:18:10.574915  738963 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:18:10.574926  738963 kubeadm.go:319] 
	I1101 10:18:10.575060  738963 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:18:10.575181  738963 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:18:10.575199  738963 kubeadm.go:319] 
	I1101 10:18:10.575357  738963 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a2tmz3.w8jg1dq1lgatlgyo \
	I1101 10:18:10.575516  738963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 10:18:10.575548  738963 kubeadm.go:319] 	--control-plane 
	I1101 10:18:10.575560  738963 kubeadm.go:319] 
	I1101 10:18:10.575686  738963 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:18:10.575695  738963 kubeadm.go:319] 
	I1101 10:18:10.575814  738963 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a2tmz3.w8jg1dq1lgatlgyo \
	I1101 10:18:10.575995  738963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
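For reference, the --discovery-token-ca-cert-hash printed in the join commands above is, per kubeadm's documentation, the hex-encoded SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch that reproduces the computation (the CA path matches the certs directory used elsewhere in this log, but is otherwise illustrative):

// cahash.go - compute a kubeadm-style discovery-token-ca-cert-hash from a CA PEM.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}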
	I1101 10:18:10.576015  738963 cni.go:84] Creating CNI manager for ""
	I1101 10:18:10.576027  738963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:18:10.577402  738963 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:18:10.580112  738963 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:18:10.586359  738963 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1101 10:18:10.586380  738963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:18:10.604279  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:18:11.392799  738963 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:18:11.392957  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:11.392996  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-556573 minikube.k8s.io/updated_at=2025_11_01T10_18_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=old-k8s-version-556573 minikube.k8s.io/primary=true
	I1101 10:18:11.404984  738963 ops.go:34] apiserver oom_adj: -16
	I1101 10:18:11.493929  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:11.994991  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:12.494166  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:12.993981  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:13.494969  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:10.301410  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.689215828s)
	I1101 10:18:10.301445  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1101 10:18:10.301480  740314 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:18:10.301544  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:18:10.932918  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 10:18:10.932975  740314 cache_images.go:125] Successfully loaded all cached images
	I1101 10:18:10.932982  740314 cache_images.go:94] duration metric: took 11.648498761s to LoadCachedImages
	I1101 10:18:10.933000  740314 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:18:10.933148  740314 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-680879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
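The kubelet command line above is rendered into a systemd drop-in (the 10-kubeadm.conf copied a few lines later). A toy sketch, using text/template with a hypothetical params struct rather than minikube's actual template, of rendering such a drop-in from the node name and IP:

// kubeletunit.go - render a kubelet systemd drop-in from node parameters (hypothetical template).
package main

import (
	"log"
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

type params struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values mirror this run's log; adjust per node.
	err := t.Execute(os.Stdout, params{
		KubernetesVersion: "v1.34.1",
		NodeName:          "no-preload-680879",
		NodeIP:            "192.168.85.2",
	})
	if err != nil {
		log.Fatal(err)
	}
}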
	I1101 10:18:10.933315  740314 ssh_runner.go:195] Run: crio config
	I1101 10:18:10.982286  740314 cni.go:84] Creating CNI manager for ""
	I1101 10:18:10.982308  740314 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:18:10.982326  740314 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:18:10.982352  740314 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-680879 NodeName:no-preload-680879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:18:10.982500  740314 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-680879"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
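The generated kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch, assuming gopkg.in/yaml.v3 is available, that splits such a stream read from stdin and prints each document's kind:

// splitkubeadm.go - list the kinds in a multi-document kubeadm config stream.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	dec := yaml.NewDecoder(os.Stdin)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			log.Fatalf("document %d: %v", i, err)
		}
		fmt.Printf("document %d: %s (%s)\n", i, doc.Kind, doc.APIVersion)
	}
}

Running it against the file the log writes a few lines below (for example, go run splitkubeadm.go < /var/tmp/minikube/kubeadm.yaml) would list the four documents shown here.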
	I1101 10:18:10.982575  740314 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:18:10.991722  740314 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1101 10:18:10.991775  740314 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1101 10:18:11.000719  740314 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1101 10:18:11.000782  740314 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1101 10:18:11.000808  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1101 10:18:11.000829  740314 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1101 10:18:11.005211  740314 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1101 10:18:11.005242  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1101 10:18:12.378760  740314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:18:12.393536  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1101 10:18:12.398283  740314 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1101 10:18:12.398324  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1101 10:18:12.653488  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1101 10:18:12.658271  740314 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1101 10:18:12.658314  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
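The three downloads above use checksum-pinned URLs (?checksum=file:...sha256), i.e. each binary is checked against the published .sha256 file before it is installed. A minimal sketch of that verification step; the file names are placeholders, and the .sha256 file is assumed to contain just the hex digest:

// verifysha.go - compare a file's SHA-256 with the digest in a .sha256 file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	got, err := fileSHA256("kubeadm") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	want, err := os.ReadFile("kubeadm.sha256") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	if got != strings.TrimSpace(string(want)) {
		log.Fatalf("checksum mismatch: got %s want %s", got, strings.TrimSpace(string(want)))
	}
	fmt.Println("checksum OK:", got)
}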
	I1101 10:18:12.835512  740314 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:18:12.844449  740314 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:18:12.858210  740314 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:18:12.874634  740314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1101 10:18:12.888825  740314 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:18:12.892966  740314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
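The bash one-liner above is an idempotent update of /etc/hosts: drop any existing line tagged control-plane.minikube.internal, then append the current mapping. A small sketch of the same filter-and-append logic, printing the rewritten content to stdout instead of replacing /etc/hosts:

// hostsentry.go - filter-and-append sketch for a control-plane host record.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const entryHost = "control-plane.minikube.internal"
	const entryIP = "192.168.85.2" // value from this run's log; adjust per cluster

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the control-plane name.
		if strings.HasSuffix(line, "\t"+entryHost) {
			continue
		}
		out = append(out, line)
	}
	out = append(out, entryIP+"\t"+entryHost)
	fmt.Println(strings.Join(out, "\n"))
}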
	I1101 10:18:12.903912  740314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:18:12.986457  740314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:18:13.011948  740314 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879 for IP: 192.168.85.2
	I1101 10:18:13.011980  740314 certs.go:195] generating shared ca certs ...
	I1101 10:18:13.012007  740314 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.012202  740314 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:18:13.012263  740314 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:18:13.012276  740314 certs.go:257] generating profile certs ...
	I1101 10:18:13.012343  740314 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.key
	I1101 10:18:13.012374  740314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt with IP's: []
	I1101 10:18:13.195814  740314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt ...
	I1101 10:18:13.195869  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: {Name:mk67b702ea5503c66efd1bd87a0c98646d7640ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.196068  740314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.key ...
	I1101 10:18:13.196087  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.key: {Name:mkc60edbc2b1463c81ab8781aca273c413ceaa90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.196212  740314 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d
	I1101 10:18:13.196233  740314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt.0ccb300d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:18:13.484150  740314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt.0ccb300d ...
	I1101 10:18:13.484193  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt.0ccb300d: {Name:mk661ef05477b162b65c9212fe9778e04d74403d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.484407  740314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d ...
	I1101 10:18:13.484430  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d: {Name:mk4a7ae58d6bfc52b3ce47998c0eb69bf2cee6a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.484579  740314 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt.0ccb300d -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt
	I1101 10:18:13.484682  740314 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key
	I1101 10:18:13.484767  740314 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key
	I1101 10:18:13.484791  740314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt with IP's: []
	I1101 10:18:14.224353  740314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt ...
	I1101 10:18:14.224393  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt: {Name:mk145b341e88e9e42f976d5f15bd79401a807fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:14.224644  740314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key ...
	I1101 10:18:14.224664  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key: {Name:mk392520b68e41d3d7e442fe2e4ed6bf585db2eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
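The certs steps above issue a profile apiserver certificate signed by the minikube CA, valid for the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A self-contained sketch of issuing a certificate with IP SANs, using a throwaway CA rather than minikube's existing one (all names here are illustrative):

// ipsancert.go - issue a certificate with IP SANs from a throwaway CA.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the ones logged above for the apiserver certificate.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}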
	I1101 10:18:14.224921  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:18:14.224974  740314 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:18:14.224991  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:18:14.225022  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:18:14.225052  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:18:14.225086  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:18:14.225143  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:18:14.225905  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:18:14.246222  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:18:14.264983  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:18:14.283717  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:18:14.302280  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:18:14.321258  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:18:10.012047  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:10.012547  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:10.511997  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:13.994533  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:14.494694  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:14.994700  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:15.494749  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:15.994770  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:16.494735  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:16.994962  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:17.494885  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:17.995628  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:18.494068  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:14.340408  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:18:14.359345  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:18:14.377885  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:18:14.398705  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:18:14.417804  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:18:14.436902  740314 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:18:14.451161  740314 ssh_runner.go:195] Run: openssl version
	I1101 10:18:14.458076  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:18:14.468144  740314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:18:14.472447  740314 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:18:14.472520  740314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:18:14.514250  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:18:14.524433  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:18:14.534351  740314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:18:14.538753  740314 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:18:14.538819  740314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:18:14.579706  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:18:14.589878  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:18:14.599447  740314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:18:14.603568  740314 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:18:14.603691  740314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:18:14.640281  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:18:14.649758  740314 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:18:14.653828  740314 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:18:14.653919  740314 kubeadm.go:401] StartCluster: {Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:18:14.654020  740314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:18:14.654081  740314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:18:14.684951  740314 cri.go:89] found id: ""
	I1101 10:18:14.685028  740314 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:18:14.694025  740314 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:18:14.705300  740314 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:18:14.705358  740314 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:18:14.713961  740314 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:18:14.713981  740314 kubeadm.go:158] found existing configuration files:
	
	I1101 10:18:14.714023  740314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:18:14.722639  740314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:18:14.722695  740314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:18:14.730701  740314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:18:14.739183  740314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:18:14.739233  740314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:18:14.747740  740314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:18:14.756348  740314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:18:14.756413  740314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:18:14.764415  740314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:18:14.772970  740314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:18:14.773057  740314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:18:14.781255  740314 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:18:14.839290  740314 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:18:14.896739  740314 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:18:15.513068  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:15.513129  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:18.994935  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:19.494014  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:19.994641  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:20.494597  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:20.994934  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:21.494099  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:21.994075  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:22.494059  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:22.578536  738963 kubeadm.go:1114] duration metric: took 11.18565038s to wait for elevateKubeSystemPrivileges
	I1101 10:18:22.578576  738963 kubeadm.go:403] duration metric: took 22.539398327s to StartCluster
	I1101 10:18:22.578602  738963 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:22.578690  738963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:18:22.579984  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:22.580235  738963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:18:22.580246  738963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:18:22.580338  738963 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:18:22.580452  738963 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-556573"
	I1101 10:18:22.580461  738963 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:18:22.580470  738963 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-556573"
	I1101 10:18:22.580500  738963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-556573"
	I1101 10:18:22.580474  738963 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-556573"
	I1101 10:18:22.580727  738963 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:18:22.580985  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:18:22.581324  738963 out.go:179] * Verifying Kubernetes components...
	I1101 10:18:22.581511  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:18:22.582749  738963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:18:22.610747  738963 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-556573"
	I1101 10:18:22.610809  738963 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:18:22.611749  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:18:22.614141  738963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:22.615053  738963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:18:22.615085  738963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:18:22.615153  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:18:22.645273  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:18:22.649323  738963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:18:22.649351  738963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:18:22.649425  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:18:22.675603  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:18:22.691971  738963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:18:22.741757  738963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:18:22.773809  738963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:18:22.801962  738963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:18:22.939018  738963 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1101 10:18:22.940339  738963 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-556573" to be "Ready" ...
	I1101 10:18:23.162439  738963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:18:23.163315  738963 addons.go:515] duration metric: took 582.971716ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:18:23.443141  738963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-556573" context rescaled to 1 replicas
	I1101 10:18:24.485587  740314 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:18:24.485650  740314 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:18:24.485767  740314 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:18:24.485894  740314 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:18:24.485942  740314 kubeadm.go:319] OS: Linux
	I1101 10:18:24.485997  740314 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:18:24.486057  740314 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:18:24.486128  740314 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:18:24.486190  740314 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:18:24.486260  740314 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:18:24.486306  740314 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:18:24.486351  740314 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:18:24.486389  740314 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:18:24.486489  740314 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:18:24.486629  740314 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:18:24.486766  740314 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:18:24.486864  740314 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:18:24.488111  740314 out.go:252]   - Generating certificates and keys ...
	I1101 10:18:24.488202  740314 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:18:24.488277  740314 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:18:24.488356  740314 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:18:24.488457  740314 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:18:24.488524  740314 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:18:24.488586  740314 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:18:24.488648  740314 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:18:24.488750  740314 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-680879] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:18:24.488802  740314 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:18:24.488934  740314 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-680879] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:18:24.489028  740314 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:18:24.489135  740314 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:18:24.489215  740314 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:18:24.489273  740314 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:18:24.489318  740314 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:18:24.489396  740314 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:18:24.489486  740314 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:18:24.489547  740314 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:18:24.489594  740314 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:18:24.489686  740314 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:18:24.489774  740314 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:18:24.491008  740314 out.go:252]   - Booting up control plane ...
	I1101 10:18:24.491102  740314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:18:24.491215  740314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:18:24.491319  740314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:18:24.491443  740314 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:18:24.491523  740314 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:18:24.491624  740314 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:18:24.491711  740314 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:18:24.491759  740314 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:18:24.491918  740314 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:18:24.492017  740314 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:18:24.492072  740314 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001607231s
	I1101 10:18:24.492150  740314 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:18:24.492227  740314 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:18:24.492303  740314 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:18:24.492370  740314 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:18:24.492447  740314 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.621712454s
	I1101 10:18:24.492514  740314 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.104922609s
	I1101 10:18:24.492575  740314 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001612362s
	I1101 10:18:24.492675  740314 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:18:24.492796  740314 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:18:24.492910  740314 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:18:24.493150  740314 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-680879 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:18:24.493205  740314 kubeadm.go:319] [bootstrap-token] Using token: psgks8.xzghorqz7mq8617s
	I1101 10:18:24.494307  740314 out.go:252]   - Configuring RBAC rules ...
	I1101 10:18:24.494396  740314 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:18:24.494472  740314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:18:24.494626  740314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:18:24.494738  740314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:18:24.494865  740314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:18:24.494943  740314 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:18:24.495054  740314 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:18:24.495099  740314 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:18:24.495139  740314 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:18:24.495145  740314 kubeadm.go:319] 
	I1101 10:18:24.495195  740314 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:18:24.495201  740314 kubeadm.go:319] 
	I1101 10:18:24.495271  740314 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:18:24.495276  740314 kubeadm.go:319] 
	I1101 10:18:24.495297  740314 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:18:24.495360  740314 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:18:24.495408  740314 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:18:24.495414  740314 kubeadm.go:319] 
	I1101 10:18:24.495467  740314 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:18:24.495473  740314 kubeadm.go:319] 
	I1101 10:18:24.495541  740314 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:18:24.495557  740314 kubeadm.go:319] 
	I1101 10:18:24.495633  740314 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:18:24.495735  740314 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:18:24.495832  740314 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:18:24.495851  740314 kubeadm.go:319] 
	I1101 10:18:24.495967  740314 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:18:24.496041  740314 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:18:24.496051  740314 kubeadm.go:319] 
	I1101 10:18:24.496123  740314 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token psgks8.xzghorqz7mq8617s \
	I1101 10:18:24.496226  740314 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 10:18:24.496259  740314 kubeadm.go:319] 	--control-plane 
	I1101 10:18:24.496268  740314 kubeadm.go:319] 
	I1101 10:18:24.496356  740314 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:18:24.496363  740314 kubeadm.go:319] 
	I1101 10:18:24.496434  740314 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token psgks8.xzghorqz7mq8617s \
	I1101 10:18:24.496561  740314 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
	I1101 10:18:24.496580  740314 cni.go:84] Creating CNI manager for ""
	I1101 10:18:24.496589  740314 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:18:24.497695  740314 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:18:20.513640  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:20.513705  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1101 10:18:24.945133  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	W1101 10:18:27.443607  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	I1101 10:18:24.498571  740314 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:18:24.503506  740314 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:18:24.503527  740314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:18:24.518226  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:18:24.784362  740314 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:18:24.784497  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-680879 minikube.k8s.io/updated_at=2025_11_01T10_18_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=no-preload-680879 minikube.k8s.io/primary=true
	I1101 10:18:24.784578  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:24.877234  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:24.877234  740314 ops.go:34] apiserver oom_adj: -16
	I1101 10:18:25.378064  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:25.878239  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:26.377566  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:26.878034  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:27.377406  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:27.877706  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:28.377733  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:28.878109  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:29.377335  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:29.444963  740314 kubeadm.go:1114] duration metric: took 4.660595599s to wait for elevateKubeSystemPrivileges
	I1101 10:18:29.445008  740314 kubeadm.go:403] duration metric: took 14.791108031s to StartCluster
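The repeated `kubectl get sa default` runs between 10:18:24.877 and 10:18:29.377 are a wait for kubeadm to create the "default" ServiceAccount before the `minikube-rbac` ClusterRoleBinding created just above can take effect. A hedged client-go sketch of that wait follows; this is not minikube's actual elevateKubeSystemPrivileges code, only the kubeconfig path and the ~500ms retry cadence are taken from the log.

```go
// Sketch: poll for the "default" ServiceAccount in the default namespace,
// mirroring the repeated "kubectl get sa default" calls in the log.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Retry every 500ms, give up after 2 minutes (timeout is an assumption).
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil // keep retrying on any error
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account exists")
}
```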
	I1101 10:18:29.445035  740314 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:29.445122  740314 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:18:29.446569  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:29.446869  740314 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:18:29.446907  740314 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:18:29.446960  740314 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:18:29.447067  740314 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:18:29.447081  740314 addons.go:70] Setting storage-provisioner=true in profile "no-preload-680879"
	I1101 10:18:29.447099  740314 addons.go:70] Setting default-storageclass=true in profile "no-preload-680879"
	I1101 10:18:29.447132  740314 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-680879"
	I1101 10:18:29.447103  740314 addons.go:239] Setting addon storage-provisioner=true in "no-preload-680879"
	I1101 10:18:29.447269  740314 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:18:29.447579  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:18:29.447731  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:18:29.450386  740314 out.go:179] * Verifying Kubernetes components...
	I1101 10:18:29.451973  740314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:18:29.470998  740314 addons.go:239] Setting addon default-storageclass=true in "no-preload-680879"
	I1101 10:18:29.471050  740314 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:18:29.471186  740314 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:25.514207  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:25.514279  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:29.471534  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:18:29.472271  740314 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:18:29.472292  740314 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:18:29.472361  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:18:29.495730  740314 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:18:29.495764  740314 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:18:29.495853  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:18:29.496179  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:18:29.519292  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:18:29.542601  740314 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:18:29.596581  740314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:18:29.616447  740314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:18:29.638600  740314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:18:29.720335  740314 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:18:29.721743  740314 node_ready.go:35] waiting up to 6m0s for node "no-preload-680879" to be "Ready" ...
	I1101 10:18:29.921096  740314 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1101 10:18:29.943881  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	W1101 10:18:31.944091  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	I1101 10:18:29.921890  740314 addons.go:515] duration metric: took 474.941164ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:18:30.224852  740314 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-680879" context rescaled to 1 replicas
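The sshutil lines above (Port:33178) get their host port from the `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls a few lines earlier: the kicbase container publishes its SSH port on 127.0.0.1 at a random host port. A small exec-based Go sketch of the same lookup; minikube wraps this in cli_runner, and the key path/username printed at the end are taken from the sshutil log line but shortened to the default ~/.minikube layout.

```go
// Sketch: resolve the host port mapped to the container's 22/tcp and print an
// equivalent ssh command (illustration only).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-680879")
	if err != nil {
		panic(err)
	}
	// Username "docker" and the per-machine id_rsa come from the sshutil line.
	fmt.Printf("ssh -i ~/.minikube/machines/no-preload-680879/id_rsa -p %s docker@127.0.0.1\n", port)
}
```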
	W1101 10:18:31.725627  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	W1101 10:18:34.225341  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	I1101 10:18:30.514666  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:30.514725  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:31.687728  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:58220->192.168.103.2:8443: read: connection reset by peer
	I1101 10:18:31.687782  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:31.688197  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:32.011513  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:32.011967  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:32.511614  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:32.512160  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:33.011829  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:33.012321  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:33.512045  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:33.512481  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:34.012263  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:34.012761  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:34.512476  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:34.513005  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
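The 734517 process interleaved above is probing a third profile's apiserver at https://192.168.103.2:8443/healthz roughly every 500ms and logging each refusal while that apiserver restarts. A simplified sketch of such a probe, skipping TLS verification because the apiserver's certificate is not trusted by the host; this is an illustration of the pattern, not minikube's exact api_server.go implementation.

```go
// Sketch: GET /healthz on the apiserver, treating connection errors and
// non-200 responses as "not up yet", and retry on the cadence seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func healthz(url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while the apiserver restarts
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // body is typically just "ok", as echoed in the log
}

func main() {
	url := "https://192.168.103.2:8443/healthz"
	for {
		if err := healthz(url); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
}
```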
	W1101 10:18:34.443797  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	I1101 10:18:36.443695  738963 node_ready.go:49] node "old-k8s-version-556573" is "Ready"
	I1101 10:18:36.443732  738963 node_ready.go:38] duration metric: took 13.503361146s for node "old-k8s-version-556573" to be "Ready" ...
	I1101 10:18:36.443750  738963 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:18:36.443815  738963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:18:36.456387  738963 api_server.go:72] duration metric: took 13.876100443s to wait for apiserver process to appear ...
	I1101 10:18:36.456422  738963 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:18:36.456456  738963 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:18:36.460765  738963 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 10:18:36.461998  738963 api_server.go:141] control plane version: v1.28.0
	I1101 10:18:36.462033  738963 api_server.go:131] duration metric: took 5.60277ms to wait for apiserver health ...
	I1101 10:18:36.462042  738963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:18:36.465787  738963 system_pods.go:59] 8 kube-system pods found
	I1101 10:18:36.465866  738963 system_pods.go:61] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:36.465882  738963 system_pods.go:61] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:36.465893  738963 system_pods.go:61] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:36.465899  738963 system_pods.go:61] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:36.465909  738963 system_pods.go:61] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:36.465914  738963 system_pods.go:61] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:36.465920  738963 system_pods.go:61] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:36.465930  738963 system_pods.go:61] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:36.465946  738963 system_pods.go:74] duration metric: took 3.896458ms to wait for pod list to return data ...
	I1101 10:18:36.465961  738963 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:18:36.467989  738963 default_sa.go:45] found service account: "default"
	I1101 10:18:36.468011  738963 default_sa.go:55] duration metric: took 2.042477ms for default service account to be created ...
	I1101 10:18:36.468020  738963 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:18:36.471293  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:36.471329  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:36.471338  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:36.471351  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:36.471357  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:36.471363  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:36.471368  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:36.471381  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:36.471393  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:36.471434  738963 retry.go:31] will retry after 192.603663ms: missing components: kube-dns
	I1101 10:18:36.668587  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:36.668642  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:36.668651  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:36.668659  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:36.668665  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:36.668671  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:36.668676  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:36.668686  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:36.668697  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:36.668719  738963 retry.go:31] will retry after 277.22195ms: missing components: kube-dns
	I1101 10:18:36.950586  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:36.950645  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:36.950660  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:36.950669  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:36.950675  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:36.950686  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:36.950691  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:36.950695  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:36.950707  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:36.950727  738963 retry.go:31] will retry after 403.084038ms: missing components: kube-dns
	I1101 10:18:37.357668  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:37.357707  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:37.357714  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:37.357719  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:37.357723  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:37.357728  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:37.357732  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:37.357735  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:37.357739  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:37.357760  738963 retry.go:31] will retry after 462.647878ms: missing components: kube-dns
	I1101 10:18:37.825041  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:37.825078  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Running
	I1101 10:18:37.825086  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:37.825091  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:37.825098  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:37.825104  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:37.825109  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:37.825115  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:37.825121  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Running
	I1101 10:18:37.825133  738963 system_pods.go:126] duration metric: took 1.357105468s to wait for k8s-apps to be running ...
	I1101 10:18:37.825147  738963 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:18:37.825208  738963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:18:37.839137  738963 system_svc.go:56] duration metric: took 13.973146ms WaitForService to wait for kubelet
	I1101 10:18:37.839172  738963 kubeadm.go:587] duration metric: took 15.25889387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:18:37.839201  738963 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:18:37.841954  738963 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:18:37.841985  738963 node_conditions.go:123] node cpu capacity is 8
	I1101 10:18:37.841998  738963 node_conditions.go:105] duration metric: took 2.792159ms to run NodePressure ...
	I1101 10:18:37.842012  738963 start.go:242] waiting for startup goroutines ...
	I1101 10:18:37.842021  738963 start.go:247] waiting for cluster config update ...
	I1101 10:18:37.842035  738963 start.go:256] writing updated cluster config ...
	I1101 10:18:37.842333  738963 ssh_runner.go:195] Run: rm -f paused
	I1101 10:18:37.846351  738963 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:18:37.850801  738963 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-cprx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.855924  738963 pod_ready.go:94] pod "coredns-5dd5756b68-cprx9" is "Ready"
	I1101 10:18:37.855951  738963 pod_ready.go:86] duration metric: took 5.12496ms for pod "coredns-5dd5756b68-cprx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.858621  738963 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.862695  738963 pod_ready.go:94] pod "etcd-old-k8s-version-556573" is "Ready"
	I1101 10:18:37.862716  738963 pod_ready.go:86] duration metric: took 4.071246ms for pod "etcd-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.865127  738963 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.873200  738963 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-556573" is "Ready"
	I1101 10:18:37.873298  738963 pod_ready.go:86] duration metric: took 8.146998ms for pod "kube-apiserver-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.883663  738963 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:38.251129  738963 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-556573" is "Ready"
	I1101 10:18:38.251161  738963 pod_ready.go:86] duration metric: took 367.462146ms for pod "kube-controller-manager-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:38.450774  738963 pod_ready.go:83] waiting for pod "kube-proxy-s9fsm" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:18:36.225430  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	W1101 10:18:38.225569  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	I1101 10:18:38.850535  738963 pod_ready.go:94] pod "kube-proxy-s9fsm" is "Ready"
	I1101 10:18:38.850562  738963 pod_ready.go:86] duration metric: took 399.759873ms for pod "kube-proxy-s9fsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:39.051414  738963 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:39.450679  738963 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-556573" is "Ready"
	I1101 10:18:39.450709  738963 pod_ready.go:86] duration metric: took 399.266371ms for pod "kube-scheduler-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:39.450721  738963 pod_ready.go:40] duration metric: took 1.604325628s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:18:39.497046  738963 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:18:39.498415  738963 out.go:203] 
	W1101 10:18:39.499605  738963 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:18:39.500655  738963 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:18:39.502040  738963 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-556573" cluster and "default" namespace by default
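The pod_ready.go lines above walk one pod per control-plane label (the list printed at 10:18:37.846) and report it "Ready" once its PodReady condition is True. The following client-go sketch performs the same per-label check; the label selectors are copied from the log, everything else (names, kubeconfig path) is an assumption for illustration.

```go
// Sketch: for each control-plane label, list kube-system pods and report
// whether their PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Same component labels the log enumerates.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-55s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
```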
	I1101 10:18:35.011496  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:35.012087  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:35.511714  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:35.512227  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:36.011482  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:36.011919  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:36.512471  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:36.512951  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:37.011590  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:37.012079  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:37.511714  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:37.512153  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:38.011797  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:38.012334  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:38.512036  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:38.512501  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:39.011766  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:39.012268  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:39.511921  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:39.512385  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	W1101 10:18:40.725182  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	I1101 10:18:42.724400  740314 node_ready.go:49] node "no-preload-680879" is "Ready"
	I1101 10:18:42.724437  740314 node_ready.go:38] duration metric: took 13.002662095s for node "no-preload-680879" to be "Ready" ...
	I1101 10:18:42.724457  740314 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:18:42.724527  740314 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:18:42.738162  740314 api_server.go:72] duration metric: took 13.291249668s to wait for apiserver process to appear ...
	I1101 10:18:42.738194  740314 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:18:42.738218  740314 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:18:42.742912  740314 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:18:42.744056  740314 api_server.go:141] control plane version: v1.34.1
	I1101 10:18:42.744088  740314 api_server.go:131] duration metric: took 5.886134ms to wait for apiserver health ...
	I1101 10:18:42.744099  740314 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:18:42.748220  740314 system_pods.go:59] 8 kube-system pods found
	I1101 10:18:42.748258  740314 system_pods.go:61] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:42.748267  740314 system_pods.go:61] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:42.748275  740314 system_pods.go:61] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:42.748281  740314 system_pods.go:61] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:42.748287  740314 system_pods.go:61] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:42.748294  740314 system_pods.go:61] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:42.748300  740314 system_pods.go:61] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:42.748307  740314 system_pods.go:61] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:42.748317  740314 system_pods.go:74] duration metric: took 4.210344ms to wait for pod list to return data ...
	I1101 10:18:42.748327  740314 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:18:42.751008  740314 default_sa.go:45] found service account: "default"
	I1101 10:18:42.751029  740314 default_sa.go:55] duration metric: took 2.695361ms for default service account to be created ...
	I1101 10:18:42.751046  740314 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:18:42.753639  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:42.753665  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:42.753671  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:42.753677  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:42.753689  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:42.753694  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:42.753698  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:42.753703  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:42.753710  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:42.753741  740314 retry.go:31] will retry after 211.09158ms: missing components: kube-dns
	I1101 10:18:42.968806  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:42.968858  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:42.968866  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:42.968873  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:42.968877  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:42.968883  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:42.968886  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:42.968890  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:42.968894  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:42.968914  740314 retry.go:31] will retry after 274.560478ms: missing components: kube-dns
	I1101 10:18:43.248096  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:43.248134  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:43.248140  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:43.248145  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:43.248149  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:43.248152  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:43.248157  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:43.248160  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:43.248165  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:43.248181  740314 retry.go:31] will retry after 293.247064ms: missing components: kube-dns
	I1101 10:18:43.545044  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:43.545077  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:43.545082  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:43.545088  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:43.545092  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:43.545097  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:43.545100  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:43.545104  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:43.545108  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:43.545126  740314 retry.go:31] will retry after 576.006416ms: missing components: kube-dns
	I1101 10:18:44.125748  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:44.125781  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Running
	I1101 10:18:44.125787  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:44.125790  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:44.125794  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:44.125798  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:44.125801  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:44.125804  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:44.125807  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Running
	I1101 10:18:44.125814  740314 system_pods.go:126] duration metric: took 1.374763735s to wait for k8s-apps to be running ...
	I1101 10:18:44.125822  740314 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:18:44.125905  740314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:18:44.140637  740314 system_svc.go:56] duration metric: took 14.798364ms WaitForService to wait for kubelet
	I1101 10:18:44.140680  740314 kubeadm.go:587] duration metric: took 14.693774339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:18:44.140705  740314 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:18:44.144140  740314 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:18:44.144168  740314 node_conditions.go:123] node cpu capacity is 8
	I1101 10:18:44.144185  740314 node_conditions.go:105] duration metric: took 3.47573ms to run NodePressure ...
	I1101 10:18:44.144199  740314 start.go:242] waiting for startup goroutines ...
	I1101 10:18:44.144207  740314 start.go:247] waiting for cluster config update ...
	I1101 10:18:44.144218  740314 start.go:256] writing updated cluster config ...
	I1101 10:18:44.144512  740314 ssh_runner.go:195] Run: rm -f paused
	I1101 10:18:44.149407  740314 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:18:44.153616  740314 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rh4z7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.158401  740314 pod_ready.go:94] pod "coredns-66bc5c9577-rh4z7" is "Ready"
	I1101 10:18:44.158432  740314 pod_ready.go:86] duration metric: took 4.788284ms for pod "coredns-66bc5c9577-rh4z7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.160661  740314 pod_ready.go:83] waiting for pod "etcd-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.164804  740314 pod_ready.go:94] pod "etcd-no-preload-680879" is "Ready"
	I1101 10:18:44.164832  740314 pod_ready.go:86] duration metric: took 4.144476ms for pod "etcd-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.167110  740314 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.171310  740314 pod_ready.go:94] pod "kube-apiserver-no-preload-680879" is "Ready"
	I1101 10:18:44.171343  740314 pod_ready.go:86] duration metric: took 4.207095ms for pod "kube-apiserver-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.173299  740314 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:40.012410  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:40.012862  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:40.511494  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:40.511911  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:41.012482  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:41.013003  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:41.511510  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:41.512034  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:42.011550  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:42.012069  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:42.511712  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:44.553594  740314 pod_ready.go:94] pod "kube-controller-manager-no-preload-680879" is "Ready"
	I1101 10:18:44.553624  740314 pod_ready.go:86] duration metric: took 380.299059ms for pod "kube-controller-manager-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.754182  740314 pod_ready.go:83] waiting for pod "kube-proxy-ft2vw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:45.154860  740314 pod_ready.go:94] pod "kube-proxy-ft2vw" is "Ready"
	I1101 10:18:45.154888  740314 pod_ready.go:86] duration metric: took 400.675768ms for pod "kube-proxy-ft2vw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:45.354323  740314 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:45.753827  740314 pod_ready.go:94] pod "kube-scheduler-no-preload-680879" is "Ready"
	I1101 10:18:45.753888  740314 pod_ready.go:86] duration metric: took 399.537997ms for pod "kube-scheduler-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:45.753906  740314 pod_ready.go:40] duration metric: took 1.60445952s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:18:45.800059  740314 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:18:45.801658  740314 out.go:179] * Done! kubectl is now configured to use "no-preload-680879" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:18:36 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:36.687022857Z" level=info msg="Started container" PID=2126 containerID=33c9542b75e2711f7a85bc1cb63cdd550af760051ddcc8ea9e26e4e2f36a575a description=kube-system/storage-provisioner/storage-provisioner id=06054cc9-d2fd-4c74-9398-c5ad79874be9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=446178e16fe57c4d2daf13d1b8f942f16ded6d86e1067ac5a9c7488b55c71591
	Nov 01 10:18:36 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:36.687324898Z" level=info msg="Started container" PID=2129 containerID=bae84618c88f0dd0bfc119592ed5f18b7e0c7b8b63ace94b3db6963dfaaaa477 description=kube-system/coredns-5dd5756b68-cprx9/coredns id=9f7215a4-48d8-452c-b448-523868202546 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a7b87c8c9150a74b0752f938a7f459e6acde9e75da85b56277d72c0a20e7cb4b
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.959428268Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f1db41ec-c8ed-47dc-8f01-9a37a95c9a1c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.959536586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.964239433Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fe13a7292568a6fc0184cce51559ce408edd58dd08bdeca3f4675c9ef611a89a UID:44ef04e3-c9bd-4265-88b9-680b1e522491 NetNS:/var/run/netns/d88db074-5db6-46e8-bce1-7137a3ccb79b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aef0}] Aliases:map[]}"
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.964277729Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.974098198Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fe13a7292568a6fc0184cce51559ce408edd58dd08bdeca3f4675c9ef611a89a UID:44ef04e3-c9bd-4265-88b9-680b1e522491 NetNS:/var/run/netns/d88db074-5db6-46e8-bce1-7137a3ccb79b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aef0}] Aliases:map[]}"
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.974241616Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.97509498Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.975879196Z" level=info msg="Ran pod sandbox fe13a7292568a6fc0184cce51559ce408edd58dd08bdeca3f4675c9ef611a89a with infra container: default/busybox/POD" id=f1db41ec-c8ed-47dc-8f01-9a37a95c9a1c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.976949715Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c0e926d7-44dd-4710-8520-1c63b356f3f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.977077671Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c0e926d7-44dd-4710-8520-1c63b356f3f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.977110152Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c0e926d7-44dd-4710-8520-1c63b356f3f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.977539404Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9189e0ae-d74b-44fb-ac69-d280d11df4da name=/runtime.v1.ImageService/PullImage
	Nov 01 10:18:39 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:39.978937588Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.101716358Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9189e0ae-d74b-44fb-ac69-d280d11df4da name=/runtime.v1.ImageService/PullImage
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.102779761Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6148a925-a332-4b10-a5b7-331396fa0846 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.104689157Z" level=info msg="Creating container: default/busybox/busybox" id=939d5a47-01d8-4d9a-9b77-86b3c06d4b89 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.104878967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.109451091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.109950819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.143237374Z" level=info msg="Created container af735e0e39d8aa9dc5a2f94e5e2e15751551a6c2c42ce93fdcc42c7e2959229e: default/busybox/busybox" id=939d5a47-01d8-4d9a-9b77-86b3c06d4b89 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.143912628Z" level=info msg="Starting container: af735e0e39d8aa9dc5a2f94e5e2e15751551a6c2c42ce93fdcc42c7e2959229e" id=d83aa97b-687d-4aff-95dd-5a48b88fcc59 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:18:42 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:42.146145977Z" level=info msg="Started container" PID=2204 containerID=af735e0e39d8aa9dc5a2f94e5e2e15751551a6c2c42ce93fdcc42c7e2959229e description=default/busybox/busybox id=d83aa97b-687d-4aff-95dd-5a48b88fcc59 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe13a7292568a6fc0184cce51559ce408edd58dd08bdeca3f4675c9ef611a89a
	Nov 01 10:18:48 old-k8s-version-556573 crio[779]: time="2025-11-01T10:18:48.742950875Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	af735e0e39d8a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   fe13a7292568a       busybox                                          default
	bae84618c88f0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   a7b87c8c9150a       coredns-5dd5756b68-cprx9                         kube-system
	33c9542b75e27       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   446178e16fe57       storage-provisioner                              kube-system
	7b230b13dc33c       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   1ea0a5fd3f45f       kindnet-cmzcq                                    kube-system
	6f04cc3101f61       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   3515f63831f25       kube-proxy-s9fsm                                 kube-system
	698b18336ecf1       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   cc08810de19ad       kube-controller-manager-old-k8s-version-556573   kube-system
	f6e7ee7c75537       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   191d7c4f61faa       kube-apiserver-old-k8s-version-556573            kube-system
	73049c47e430d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   cc85e7283162b       kube-scheduler-old-k8s-version-556573            kube-system
	c5eae9b752988       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   b86a8e9eb859c       etcd-old-k8s-version-556573                      kube-system
	
	
	==> coredns [bae84618c88f0dd0bfc119592ed5f18b7e0c7b8b63ace94b3db6963dfaaaa477] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59840 - 31182 "HINFO IN 8155382680062122066.4632656676609682285. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023069497s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-556573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-556573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=old-k8s-version-556573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_18_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-556573
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:18:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:18:41 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:18:41 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:18:41 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:18:41 +0000   Sat, 01 Nov 2025 10:18:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-556573
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                684343d3-91b0-49c0-8416-d6f599882a42
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-cprx9                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-556573                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-cmzcq                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-556573             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-556573    200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-s9fsm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-556573             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-556573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-556573 event: Registered Node old-k8s-version-556573 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-556573 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [c5eae9b7529888deca042f39702b23a5bb9c5b2a0fccf305220b82e4b844fcc6] <==
	{"level":"info","ts":"2025-11-01T10:18:05.915063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:18:05.915073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-01T10:18:05.915082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-01T10:18:05.915089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-01T10:18:05.915897Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:18:05.916411Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:18:05.916436Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-556573 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:18:05.91644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:18:05.91667Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:18:05.916696Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:18:05.916712Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:18:05.916779Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:18:05.916812Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:18:05.917932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-01T10:18:05.917941Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-11-01T10:18:09.839229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.802494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-01T10:18:09.839337Z","caller":"traceutil/trace.go:171","msg":"trace[622206109] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:214; }","duration":"159.924566ms","start":"2025-11-01T10:18:09.679385Z","end":"2025-11-01T10:18:09.83931Z","steps":["trace[622206109] 'range keys from in-memory index tree'  (duration: 159.693349ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:10.135205Z","caller":"traceutil/trace.go:171","msg":"trace[1827837113] transaction","detail":"{read_only:false; response_revision:216; number_of_response:1; }","duration":"289.793684ms","start":"2025-11-01T10:18:09.845386Z","end":"2025-11-01T10:18:10.135179Z","steps":["trace[1827837113] 'process raft request'  (duration: 247.339999ms)","trace[1827837113] 'compare'  (duration: 42.274984ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:18:10.135215Z","caller":"traceutil/trace.go:171","msg":"trace[1118656271] linearizableReadLoop","detail":"{readStateIndex:221; appliedIndex:220; }","duration":"217.717847ms","start":"2025-11-01T10:18:09.917472Z","end":"2025-11-01T10:18:10.135189Z","steps":["trace[1118656271] 'read index received'  (duration: 175.30015ms)","trace[1118656271] 'applied index is now lower than readState.Index'  (duration: 42.415651ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:18:10.135602Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.11263ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-after-finished-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:18:10.135664Z","caller":"traceutil/trace.go:171","msg":"trace[1303818265] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-after-finished-controller; range_end:; response_count:0; response_revision:216; }","duration":"218.205967ms","start":"2025-11-01T10:18:09.917433Z","end":"2025-11-01T10:18:10.135639Z","steps":["trace[1303818265] 'agreement among raft nodes before linearized reading'  (duration: 217.854177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:18:20.948181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.369123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2025-11-01T10:18:20.94824Z","caller":"traceutil/trace.go:171","msg":"trace[1256484993] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler; range_end:; response_count:1; response_revision:314; }","duration":"109.476465ms","start":"2025-11-01T10:18:20.838751Z","end":"2025-11-01T10:18:20.948227Z","steps":["trace[1256484993] 'range keys from in-memory index tree'  (duration: 109.245873ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.142257Z","caller":"traceutil/trace.go:171","msg":"trace[1035981506] transaction","detail":"{read_only:false; response_revision:317; number_of_response:1; }","duration":"118.291645ms","start":"2025-11-01T10:18:21.023945Z","end":"2025-11-01T10:18:21.142237Z","steps":["trace[1035981506] 'process raft request'  (duration: 117.727861ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.318259Z","caller":"traceutil/trace.go:171","msg":"trace[1907727771] transaction","detail":"{read_only:false; response_revision:318; number_of_response:1; }","duration":"158.243014ms","start":"2025-11-01T10:18:21.159991Z","end":"2025-11-01T10:18:21.318234Z","steps":["trace[1907727771] 'process raft request'  (duration: 94.359466ms)","trace[1907727771] 'compare'  (duration: 63.688973ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:18:50 up  3:01,  0 user,  load average: 4.87, 3.76, 2.79
	Linux old-k8s-version-556573 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b230b13dc33cb2d0ddf89b2bbb2f043e227776dbed0de9ece557decdbf4694c] <==
	I1101 10:18:25.927612       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:18:25.927882       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:18:25.928019       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:18:25.928034       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:18:25.928053       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:18:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:18:26.129320       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:18:26.129372       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:18:26.129385       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:18:26.129538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:18:26.329731       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:18:26.329759       1 metrics.go:72] Registering metrics
	I1101 10:18:26.329809       1 controller.go:711] "Syncing nftables rules"
	I1101 10:18:36.137912       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:18:36.137983       1 main.go:301] handling current node
	I1101 10:18:46.131627       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:18:46.131659       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f6e7ee7c75537c270acd42386fc683b9075ce3599702c3c8eb6d9f246d31968f] <==
	I1101 10:18:07.141697       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 10:18:07.141707       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:18:07.141719       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:18:07.141728       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 10:18:07.141730       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:18:07.141756       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:18:07.141895       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:18:07.141976       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:18:07.143233       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:18:07.153382       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:18:08.046878       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:18:08.051358       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:18:08.051377       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:18:08.506723       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:18:08.546456       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:18:08.657071       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:18:08.662864       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1101 10:18:08.664185       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 10:18:08.668410       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:18:09.082849       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:18:10.417526       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:18:10.434939       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:18:10.452921       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 10:18:22.595112       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:18:22.841971       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [698b18336ecf1b28b1f6513b8ff4dec92592aae6241b8c8c84368fa3714f1c68] <==
	I1101 10:18:22.065570       1 shared_informer.go:318] Caches are synced for job
	I1101 10:18:22.110204       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:18:22.137957       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 10:18:22.142417       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:18:22.457802       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:18:22.457849       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:18:22.461031       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:18:22.602384       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 10:18:22.855524       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s9fsm"
	I1101 10:18:22.856115       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cmzcq"
	I1101 10:18:22.946342       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xk5sg"
	I1101 10:18:22.954109       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cprx9"
	I1101 10:18:22.962240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="361.416296ms"
	I1101 10:18:22.969397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.088671ms"
	I1101 10:18:22.969517       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.271µs"
	I1101 10:18:22.983079       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 10:18:22.991338       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xk5sg"
	I1101 10:18:22.997265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.5337ms"
	I1101 10:18:23.002818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.49928ms"
	I1101 10:18:23.002977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.64µs"
	I1101 10:18:36.338912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.975µs"
	I1101 10:18:36.348036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.932µs"
	I1101 10:18:36.988583       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1101 10:18:37.608177       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.515216ms"
	I1101 10:18:37.608303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.723µs"
	
	
	==> kube-proxy [6f04cc3101f61ebf845eee1d456f504d374586ceb332db7e31f65d5d92fdcb04] <==
	I1101 10:18:23.260402       1 server_others.go:69] "Using iptables proxy"
	I1101 10:18:23.270398       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1101 10:18:23.289772       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:18:23.292314       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:18:23.292359       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:18:23.292367       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:18:23.292393       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:18:23.292640       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:18:23.292657       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:18:23.294123       1 config.go:188] "Starting service config controller"
	I1101 10:18:23.294162       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:18:23.294301       1 config.go:315] "Starting node config controller"
	I1101 10:18:23.294335       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:18:23.294306       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:18:23.294357       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:18:23.395031       1 shared_informer.go:318] Caches are synced for service config
	I1101 10:18:23.395052       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 10:18:23.395068       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [73049c47e430de8c5ff4f9e68cdaf15b6b4041e9bf862024fdef3d3d53cda026] <==
	W1101 10:18:07.104426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 10:18:07.105257       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 10:18:07.104502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 10:18:07.105311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 10:18:07.104577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 10:18:07.105364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 10:18:07.104650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 10:18:07.105447       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 10:18:07.104818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 10:18:07.105530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 10:18:07.104915       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 10:18:07.105638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 10:18:07.104944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 10:18:07.105715       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 10:18:07.105040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 10:18:07.105803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 10:18:07.105830       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 10:18:07.105894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 10:18:08.067222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 10:18:08.067354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 10:18:08.202170       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 10:18:08.202210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 10:18:08.257955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 10:18:08.257998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1101 10:18:08.694879       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:18:21 old-k8s-version-556573 kubelet[1387]: I1101 10:18:21.881486    1387 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.863487    1387 topology_manager.go:215] "Topology Admit Handler" podUID="308c1bec-8f02-4276-bb6a-4d15f8d53e89" podNamespace="kube-system" podName="kube-proxy-s9fsm"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.865317    1387 topology_manager.go:215] "Topology Admit Handler" podUID="be7200a1-400a-46fa-9832-af04d5ba8826" podNamespace="kube-system" podName="kindnet-cmzcq"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.970046    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7200a1-400a-46fa-9832-af04d5ba8826-xtables-lock\") pod \"kindnet-cmzcq\" (UID: \"be7200a1-400a-46fa-9832-af04d5ba8826\") " pod="kube-system/kindnet-cmzcq"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.970100    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rb6v\" (UniqueName: \"kubernetes.io/projected/be7200a1-400a-46fa-9832-af04d5ba8826-kube-api-access-9rb6v\") pod \"kindnet-cmzcq\" (UID: \"be7200a1-400a-46fa-9832-af04d5ba8826\") " pod="kube-system/kindnet-cmzcq"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.970137    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7200a1-400a-46fa-9832-af04d5ba8826-lib-modules\") pod \"kindnet-cmzcq\" (UID: \"be7200a1-400a-46fa-9832-af04d5ba8826\") " pod="kube-system/kindnet-cmzcq"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.970263    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/308c1bec-8f02-4276-bb6a-4d15f8d53e89-xtables-lock\") pod \"kube-proxy-s9fsm\" (UID: \"308c1bec-8f02-4276-bb6a-4d15f8d53e89\") " pod="kube-system/kube-proxy-s9fsm"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.970333    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/308c1bec-8f02-4276-bb6a-4d15f8d53e89-lib-modules\") pod \"kube-proxy-s9fsm\" (UID: \"308c1bec-8f02-4276-bb6a-4d15f8d53e89\") " pod="kube-system/kube-proxy-s9fsm"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.970370    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/be7200a1-400a-46fa-9832-af04d5ba8826-cni-cfg\") pod \"kindnet-cmzcq\" (UID: \"be7200a1-400a-46fa-9832-af04d5ba8826\") " pod="kube-system/kindnet-cmzcq"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.970398    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/308c1bec-8f02-4276-bb6a-4d15f8d53e89-kube-proxy\") pod \"kube-proxy-s9fsm\" (UID: \"308c1bec-8f02-4276-bb6a-4d15f8d53e89\") " pod="kube-system/kube-proxy-s9fsm"
	Nov 01 10:18:22 old-k8s-version-556573 kubelet[1387]: I1101 10:18:22.970436    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch49r\" (UniqueName: \"kubernetes.io/projected/308c1bec-8f02-4276-bb6a-4d15f8d53e89-kube-api-access-ch49r\") pod \"kube-proxy-s9fsm\" (UID: \"308c1bec-8f02-4276-bb6a-4d15f8d53e89\") " pod="kube-system/kube-proxy-s9fsm"
	Nov 01 10:18:23 old-k8s-version-556573 kubelet[1387]: I1101 10:18:23.560893    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s9fsm" podStartSLOduration=1.560808781 podCreationTimestamp="2025-11-01 10:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:23.560621244 +0000 UTC m=+13.175834864" watchObservedRunningTime="2025-11-01 10:18:23.560808781 +0000 UTC m=+13.176022398"
	Nov 01 10:18:26 old-k8s-version-556573 kubelet[1387]: I1101 10:18:26.568855    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-cmzcq" podStartSLOduration=2.046608288 podCreationTimestamp="2025-11-01 10:18:22 +0000 UTC" firstStartedPulling="2025-11-01 10:18:23.17491993 +0000 UTC m=+12.790133544" lastFinishedPulling="2025-11-01 10:18:25.697096438 +0000 UTC m=+15.312310048" observedRunningTime="2025-11-01 10:18:26.5684595 +0000 UTC m=+16.183673150" watchObservedRunningTime="2025-11-01 10:18:26.568784792 +0000 UTC m=+16.183998413"
	Nov 01 10:18:36 old-k8s-version-556573 kubelet[1387]: I1101 10:18:36.317812    1387 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 10:18:36 old-k8s-version-556573 kubelet[1387]: I1101 10:18:36.338047    1387 topology_manager.go:215] "Topology Admit Handler" podUID="5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a" podNamespace="kube-system" podName="coredns-5dd5756b68-cprx9"
	Nov 01 10:18:36 old-k8s-version-556573 kubelet[1387]: I1101 10:18:36.338793    1387 topology_manager.go:215] "Topology Admit Handler" podUID="000bb166-71a6-4e7a-b710-d5502eba8fdc" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 10:18:36 old-k8s-version-556573 kubelet[1387]: I1101 10:18:36.362710    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww5tl\" (UniqueName: \"kubernetes.io/projected/5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a-kube-api-access-ww5tl\") pod \"coredns-5dd5756b68-cprx9\" (UID: \"5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a\") " pod="kube-system/coredns-5dd5756b68-cprx9"
	Nov 01 10:18:36 old-k8s-version-556573 kubelet[1387]: I1101 10:18:36.362768    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/000bb166-71a6-4e7a-b710-d5502eba8fdc-tmp\") pod \"storage-provisioner\" (UID: \"000bb166-71a6-4e7a-b710-d5502eba8fdc\") " pod="kube-system/storage-provisioner"
	Nov 01 10:18:36 old-k8s-version-556573 kubelet[1387]: I1101 10:18:36.362827    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzh9\" (UniqueName: \"kubernetes.io/projected/000bb166-71a6-4e7a-b710-d5502eba8fdc-kube-api-access-kqzh9\") pod \"storage-provisioner\" (UID: \"000bb166-71a6-4e7a-b710-d5502eba8fdc\") " pod="kube-system/storage-provisioner"
	Nov 01 10:18:36 old-k8s-version-556573 kubelet[1387]: I1101 10:18:36.362993    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a-config-volume\") pod \"coredns-5dd5756b68-cprx9\" (UID: \"5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a\") " pod="kube-system/coredns-5dd5756b68-cprx9"
	Nov 01 10:18:37 old-k8s-version-556573 kubelet[1387]: I1101 10:18:37.591479    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.591421206 podCreationTimestamp="2025-11-01 10:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:37.591114043 +0000 UTC m=+27.206327664" watchObservedRunningTime="2025-11-01 10:18:37.591421206 +0000 UTC m=+27.206634822"
	Nov 01 10:18:37 old-k8s-version-556573 kubelet[1387]: I1101 10:18:37.601283    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cprx9" podStartSLOduration=15.601223981 podCreationTimestamp="2025-11-01 10:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:37.601195306 +0000 UTC m=+27.216408926" watchObservedRunningTime="2025-11-01 10:18:37.601223981 +0000 UTC m=+27.216437600"
	Nov 01 10:18:39 old-k8s-version-556573 kubelet[1387]: I1101 10:18:39.657804    1387 topology_manager.go:215] "Topology Admit Handler" podUID="44ef04e3-c9bd-4265-88b9-680b1e522491" podNamespace="default" podName="busybox"
	Nov 01 10:18:39 old-k8s-version-556573 kubelet[1387]: I1101 10:18:39.683192    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmc7p\" (UniqueName: \"kubernetes.io/projected/44ef04e3-c9bd-4265-88b9-680b1e522491-kube-api-access-qmc7p\") pod \"busybox\" (UID: \"44ef04e3-c9bd-4265-88b9-680b1e522491\") " pod="default/busybox"
	Nov 01 10:18:42 old-k8s-version-556573 kubelet[1387]: I1101 10:18:42.604682    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.479769766 podCreationTimestamp="2025-11-01 10:18:39 +0000 UTC" firstStartedPulling="2025-11-01 10:18:39.977267095 +0000 UTC m=+29.592480698" lastFinishedPulling="2025-11-01 10:18:42.102128887 +0000 UTC m=+31.717342496" observedRunningTime="2025-11-01 10:18:42.60448751 +0000 UTC m=+32.219701130" watchObservedRunningTime="2025-11-01 10:18:42.604631564 +0000 UTC m=+32.219845185"
	
	
	==> storage-provisioner [33c9542b75e2711f7a85bc1cb63cdd550af760051ddcc8ea9e26e4e2f36a575a] <==
	I1101 10:18:36.701378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:18:36.711788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:18:36.711875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:18:36.720386       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:18:36.720581       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-556573_2d4722ab-ab2a-4514-8b3e-647662c7f3a4!
	I1101 10:18:36.720538       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa58e27b-5340-4f47-971d-25a668ca76a2", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-556573_2d4722ab-ab2a-4514-8b3e-647662c7f3a4 became leader
	I1101 10:18:36.820687       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-556573_2d4722ab-ab2a-4514-8b3e-647662c7f3a4!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556573 -n old-k8s-version-556573
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-556573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.64272ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:18:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-680879 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-680879 describe deploy/metrics-server -n kube-system: exit status 1 (58.344292ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-680879 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-680879
helpers_test.go:243: (dbg) docker inspect no-preload-680879:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48",
	        "Created": "2025-11-01T10:17:55.281485116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 741411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:17:55.318476102Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/hostname",
	        "HostsPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/hosts",
	        "LogPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48-json.log",
	        "Name": "/no-preload-680879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-680879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-680879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48",
	                "LowerDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-680879",
	                "Source": "/var/lib/docker/volumes/no-preload-680879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-680879",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-680879",
	                "name.minikube.sigs.k8s.io": "no-preload-680879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26b4057a1df9fb0f0e6c1a2b7fd1d0a686cc47538070558c18584c01a60a6be2",
	            "SandboxKey": "/var/run/docker/netns/26b4057a1df9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-680879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:1d:2d:1d:fd:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11522e762cf9612c2344c4fb5a0996d332b23497f30d211d4b6878b748af077f",
	                    "EndpointID": "9676acc973f098c2bdb196fa30838287db34029e77039fa94824301636e3aa21",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-680879",
	                        "bdead49b30b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680879 -n no-preload-680879
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-680879 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-680879 logs -n 25: (1.09916153s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-456743 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p cilium-456743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p cilium-456743 sudo crio config                                                                                                                                                                                                             │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ delete  │ -p cilium-456743                                                                                                                                                                                                                              │ cilium-456743             │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ cert-options-278823 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-278823       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ -p cert-options-278823 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-278823       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p cert-options-278823                                                                                                                                                                                                                        │ cert-options-278823       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p force-systemd-flag-767379 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ stop    │ -p kubernetes-upgrade-949166                                                                                                                                                                                                                  │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ stop    │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ ssh     │ force-systemd-flag-767379 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p force-systemd-flag-767379                                                                                                                                                                                                                  │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-556573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:17:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:17:54.329680  740314 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:17:54.329810  740314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:17:54.329819  740314 out.go:374] Setting ErrFile to fd 2...
	I1101 10:17:54.329823  740314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:17:54.330082  740314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:17:54.330569  740314 out.go:368] Setting JSON to false
	I1101 10:17:54.332514  740314 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10811,"bootTime":1761981463,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:17:54.332630  740314 start.go:143] virtualization: kvm guest
	I1101 10:17:54.334427  740314 out.go:179] * [no-preload-680879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:17:54.335421  740314 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:17:54.335471  740314 notify.go:221] Checking for updates...
	I1101 10:17:54.337178  740314 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:17:54.341595  740314 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:17:54.342594  740314 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:17:54.343504  740314 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:17:54.344372  740314 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:17:54.345806  740314 config.go:182] Loaded profile config "cert-expiration-577441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:17:54.345947  740314 config.go:182] Loaded profile config "kubernetes-upgrade-949166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:17:54.346056  740314 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:17:54.346150  740314 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:17:54.371822  740314 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:17:54.371998  740314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:17:54.442239  740314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:17:54.431754685 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:17:54.442348  740314 docker.go:319] overlay module found
	I1101 10:17:54.443746  740314 out.go:179] * Using the docker driver based on user configuration
	I1101 10:17:54.444666  740314 start.go:309] selected driver: docker
	I1101 10:17:54.444683  740314 start.go:930] validating driver "docker" against <nil>
	I1101 10:17:54.444698  740314 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:17:54.445597  740314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:17:54.510488  740314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-01 10:17:54.499507758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:17:54.510818  740314 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:17:54.511105  740314 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:17:54.512703  740314 out.go:179] * Using Docker driver with root privileges
	I1101 10:17:54.513691  740314 cni.go:84] Creating CNI manager for ""
	I1101 10:17:54.513784  740314 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:17:54.513800  740314 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:17:54.513888  740314 start.go:353] cluster config:
	{Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:17:54.516003  740314 out.go:179] * Starting "no-preload-680879" primary control-plane node in "no-preload-680879" cluster
	I1101 10:17:54.519217  740314 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:17:54.520287  740314 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:17:54.521185  740314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:17:54.521273  740314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:17:54.521323  740314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json ...
	I1101 10:17:54.521368  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json: {Name:mkda05d903eb5a2c45b9b0342753da0683264af7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:54.521500  740314 cache.go:107] acquiring lock: {Name:mk54c640473c09dfff1239ead2dd2d51481a015a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521544  740314 cache.go:107] acquiring lock: {Name:mkf19fdae2c3486652a390b24771bb4742a08787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521607  740314 cache.go:107] acquiring lock: {Name:mke846f8ed0eae3f666a2c55755500ad865ceb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521625  740314 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:54.521622  740314 cache.go:107] acquiring lock: {Name:mke53a0d558f57413c985e8c7d551691237ca10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521685  740314 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:54.521720  740314 cache.go:107] acquiring lock: {Name:mka96111944f8dc8ebfdcd94de79dafd069ca1d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521759  740314 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:54.521735  740314 cache.go:107] acquiring lock: {Name:mkcd303cc659630879e706aba8fe46f709be28e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521737  740314 cache.go:107] acquiring lock: {Name:mk1c05d679d90243f04dc9223673738f53287a15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.521789  740314 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:54.521798  740314 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:54.521497  740314 cache.go:107] acquiring lock: {Name:mke74377eb8e8f0a2186d46bf4bdde02a944c052 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.522016  740314 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:17:54.522041  740314 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:17:54.522053  740314 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 576.984µs
	I1101 10:17:54.522064  740314 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:17:54.522126  740314 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:54.523285  740314 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:54.523384  740314 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:54.523290  740314 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:17:54.523291  740314 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:54.523292  740314 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:54.523499  740314 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:54.523434  740314 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:54.545323  740314 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:17:54.545353  740314 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:17:54.545369  740314 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:17:54.545411  740314 start.go:360] acquireMachinesLock for no-preload-680879: {Name:mkb2bd3a5c4fc957e021ade32b7982a68330a2bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:17:54.545543  740314 start.go:364] duration metric: took 106.867µs to acquireMachinesLock for "no-preload-680879"
	I1101 10:17:54.545576  740314 start.go:93] Provisioning new machine with config: &{Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:17:54.545676  740314 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:17:54.013639  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:17:54.013707  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:17:54.208676  738963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-556573:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.490403699s)
	I1101 10:17:54.208718  738963 kic.go:203] duration metric: took 4.490574402s to extract preloaded images to volume ...
	W1101 10:17:54.208871  738963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:17:54.208914  738963 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:17:54.208967  738963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:17:54.273343  738963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-556573 --name old-k8s-version-556573 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-556573 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-556573 --network old-k8s-version-556573 --ip 192.168.94.2 --volume old-k8s-version-556573:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:17:54.580571  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Running}}
	I1101 10:17:54.601970  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:17:54.625681  738963 cli_runner.go:164] Run: docker exec old-k8s-version-556573 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:17:54.676929  738963 oci.go:144] the created container "old-k8s-version-556573" has a running status.
	I1101 10:17:54.676987  738963 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa...
	I1101 10:17:55.057809  738963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:17:55.095198  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:17:55.116623  738963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:17:55.116650  738963 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-556573 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:17:55.165567  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:17:55.187143  738963 machine.go:94] provisionDockerMachine start ...
	I1101 10:17:55.187250  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:55.208309  738963 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:55.208652  738963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1101 10:17:55.208667  738963 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:17:55.370206  738963 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556573
	
	I1101 10:17:55.370240  738963 ubuntu.go:182] provisioning hostname "old-k8s-version-556573"
	I1101 10:17:55.370331  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:55.396282  738963 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:55.396830  738963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1101 10:17:55.396877  738963 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556573 && echo "old-k8s-version-556573" | sudo tee /etc/hostname
	I1101 10:17:55.563124  738963 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556573
	
	I1101 10:17:55.563208  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:55.584571  738963 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:55.584864  738963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1101 10:17:55.584891  738963 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556573/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:17:55.736331  738963 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:17:55.736363  738963 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:17:55.736390  738963 ubuntu.go:190] setting up certificates
	I1101 10:17:55.736405  738963 provision.go:84] configureAuth start
	I1101 10:17:55.736468  738963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556573
	I1101 10:17:55.756180  738963 provision.go:143] copyHostCerts
	I1101 10:17:55.756257  738963 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:17:55.756274  738963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:17:55.756382  738963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:17:55.756517  738963 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:17:55.756532  738963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:17:55.756572  738963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:17:55.756657  738963 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:17:55.756669  738963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:17:55.756719  738963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:17:55.756796  738963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556573 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-556573]
	I1101 10:17:56.126009  738963 provision.go:177] copyRemoteCerts
	I1101 10:17:56.126086  738963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:17:56.126148  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:56.152270  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:56.269687  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:17:56.310682  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:17:56.337656  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:17:56.361422  738963 provision.go:87] duration metric: took 624.997549ms to configureAuth
	I1101 10:17:56.361463  738963 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:17:56.361672  738963 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:17:56.361790  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:56.385162  738963 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:56.385532  738963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1101 10:17:56.385561  738963 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:17:56.688357  738963 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:17:56.688383  738963 machine.go:97] duration metric: took 1.501214294s to provisionDockerMachine
	I1101 10:17:56.688395  738963 client.go:176] duration metric: took 7.678945711s to LocalClient.Create
	I1101 10:17:56.688410  738963 start.go:167] duration metric: took 7.679147879s to libmachine.API.Create "old-k8s-version-556573"
	I1101 10:17:56.688425  738963 start.go:293] postStartSetup for "old-k8s-version-556573" (driver="docker")
	I1101 10:17:56.688435  738963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:17:56.688499  738963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:17:56.688538  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:56.707058  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:56.811712  738963 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:17:56.818016  738963 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:17:56.818046  738963 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:17:56.818058  738963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:17:56.818112  738963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:17:56.818193  738963 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:17:56.818294  738963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:17:56.827020  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:17:56.850579  738963 start.go:296] duration metric: took 162.137964ms for postStartSetup
	I1101 10:17:56.850976  738963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556573
	I1101 10:17:56.873161  738963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/config.json ...
	I1101 10:17:56.873459  738963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:17:56.873516  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:56.891802  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:56.991369  738963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:17:56.996386  738963 start.go:128] duration metric: took 7.990475464s to createHost
	I1101 10:17:56.996416  738963 start.go:83] releasing machines lock for "old-k8s-version-556573", held for 7.99063659s
	I1101 10:17:56.996498  738963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556573
	I1101 10:17:57.015266  738963 ssh_runner.go:195] Run: cat /version.json
	I1101 10:17:57.015332  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:57.015397  738963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:17:57.015477  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:17:57.034043  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:57.034509  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:17:57.187648  738963 ssh_runner.go:195] Run: systemctl --version
	I1101 10:17:57.195510  738963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:17:57.234782  738963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:17:57.239705  738963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:17:57.239772  738963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:17:57.267147  738963 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:17:57.267176  738963 start.go:496] detecting cgroup driver to use...
	I1101 10:17:57.267220  738963 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:17:57.267280  738963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:17:57.285222  738963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:17:57.298477  738963 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:17:57.298534  738963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:17:57.317234  738963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:17:57.336745  738963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:17:57.421539  738963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:17:57.515217  738963 docker.go:234] disabling docker service ...
	I1101 10:17:57.515296  738963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:17:57.534882  738963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:17:57.548727  738963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:17:57.636169  738963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:17:57.726612  738963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:17:57.740232  738963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:17:57.755975  738963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:17:57.756033  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.767047  738963 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:17:57.767122  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.777195  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.787417  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.797339  738963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:17:57.807067  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.816832  738963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.831783  738963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:57.841791  738963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:17:57.850716  738963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:17:57.859911  738963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:17:57.947816  738963 ssh_runner.go:195] Run: sudo systemctl restart crio
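	The run of sed/grep edits above rewrites the CRI-O drop-in before the daemon-reload and restart: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl. As a sketch only (the section headers and any pre-existing keys in the kicbase image are assumptions, not shown in this log), the resulting drop-in would look roughly like:
	
	# Hedged sketch of the file after the edits above; inspect the real one on the node with:
	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]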
	I1101 10:17:58.256302  738963 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:17:58.256372  738963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:17:58.261072  738963 start.go:564] Will wait 60s for crictl version
	I1101 10:17:58.261134  738963 ssh_runner.go:195] Run: which crictl
	I1101 10:17:58.264803  738963 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:17:58.292615  738963 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:17:58.292694  738963 ssh_runner.go:195] Run: crio --version
	I1101 10:17:58.324924  738963 ssh_runner.go:195] Run: crio --version
	I1101 10:17:58.357678  738963 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 10:17:58.358745  738963 cli_runner.go:164] Run: docker network inspect old-k8s-version-556573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:17:58.377453  738963 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 10:17:58.382358  738963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:17:58.396531  738963 kubeadm.go:884] updating cluster {Name:old-k8s-version-556573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:17:58.396716  738963 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:17:58.396787  738963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:17:58.435580  738963 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:17:58.435605  738963 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:17:58.435649  738963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:17:58.464855  738963 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:17:58.464883  738963 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:17:58.464893  738963 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1101 10:17:58.464997  738963 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-556573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
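	The unit override printed above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this step (see the scp lines below). A quick hedged check that systemd picked the drop-in up, run inside the node, would be:
	
	# Hedged check, not part of the test run:
	systemctl cat kubelet                      # should list kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p DropInPaths      # prints the drop-in path systemd resolved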
	I1101 10:17:58.465081  738963 ssh_runner.go:195] Run: crio config
	I1101 10:17:58.520068  738963 cni.go:84] Creating CNI manager for ""
	I1101 10:17:58.520093  738963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:17:58.520111  738963 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:17:58.520135  738963 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556573 NodeName:old-k8s-version-556573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:17:58.520324  738963 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-556573"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:17:58.520383  738963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:17:58.529452  738963 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:17:58.529530  738963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:17:58.538346  738963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:17:58.552569  738963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:17:58.569689  738963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
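	The 2159-byte file staged above is the kubeadm/kubelet/kube-proxy config bundle dumped at kubeadm.go:196 earlier. A hedged way to sanity-check a file of this shape by hand, assuming the v1.28.0 kubeadm binary already unpacked under /var/lib/minikube/binaries and that the "config validate" subcommand is available in this kubeadm release, would be:
	
	# Hedged example only; minikube itself does not run this here.
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new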
	I1101 10:17:58.584988  738963 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:17:58.588925  738963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
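	The grep/echo pipeline above (and the matching one for host.minikube.internal earlier) is an idempotent replace-or-append edit: drop any existing line for the hostname, append the fresh mapping, then copy the temp file back over /etc/hosts. A minimal generic sketch of the same pattern (the variable names are placeholders, not minikube's):
	
	# Placeholder names; same technique as the Run line above.
	ip=192.168.94.2
	name=control-plane.minikube.internal
	{ grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts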
	I1101 10:17:58.600152  738963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:17:58.688530  738963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:17:58.711925  738963 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573 for IP: 192.168.94.2
	I1101 10:17:58.711957  738963 certs.go:195] generating shared ca certs ...
	I1101 10:17:58.711989  738963 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:58.712161  738963 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:17:58.712217  738963 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:17:58.712230  738963 certs.go:257] generating profile certs ...
	I1101 10:17:58.712299  738963 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.key
	I1101 10:17:58.712316  738963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt with IP's: []
	I1101 10:17:54.548181  740314 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:17:54.548448  740314 start.go:159] libmachine.API.Create for "no-preload-680879" (driver="docker")
	I1101 10:17:54.548503  740314 client.go:173] LocalClient.Create starting
	I1101 10:17:54.548566  740314 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem
	I1101 10:17:54.548613  740314 main.go:143] libmachine: Decoding PEM data...
	I1101 10:17:54.548646  740314 main.go:143] libmachine: Parsing certificate...
	I1101 10:17:54.548730  740314 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem
	I1101 10:17:54.548757  740314 main.go:143] libmachine: Decoding PEM data...
	I1101 10:17:54.548785  740314 main.go:143] libmachine: Parsing certificate...
	I1101 10:17:54.549266  740314 cli_runner.go:164] Run: docker network inspect no-preload-680879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:17:54.567956  740314 cli_runner.go:211] docker network inspect no-preload-680879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:17:54.568065  740314 network_create.go:284] running [docker network inspect no-preload-680879] to gather additional debugging logs...
	I1101 10:17:54.568083  740314 cli_runner.go:164] Run: docker network inspect no-preload-680879
	W1101 10:17:54.587569  740314 cli_runner.go:211] docker network inspect no-preload-680879 returned with exit code 1
	I1101 10:17:54.587597  740314 network_create.go:287] error running [docker network inspect no-preload-680879]: docker network inspect no-preload-680879: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-680879 not found
	I1101 10:17:54.587611  740314 network_create.go:289] output of [docker network inspect no-preload-680879]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-680879 not found
	
	** /stderr **
	I1101 10:17:54.587730  740314 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:17:54.608251  740314 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db3052bfa0e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:6a:af:78:80:46} reservation:<nil>}
	I1101 10:17:54.609244  740314 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-99d2741e1e59 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:99:ce:91:38:1c} reservation:<nil>}
	I1101 10:17:54.610099  740314 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a696a61d1319 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:f0:66:2c:aa:f2} reservation:<nil>}
	I1101 10:17:54.610614  740314 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d8ebd2dfecb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:d8:5a:bb:d5:46} reservation:<nil>}
	I1101 10:17:54.611489  740314 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00244b380}
	I1101 10:17:54.611524  740314 network_create.go:124] attempt to create docker network no-preload-680879 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:17:54.611578  740314 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-680879 no-preload-680879
	I1101 10:17:54.680988  740314 network_create.go:108] docker network no-preload-680879 192.168.85.0/24 created
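	The network.go lines above show the subnet picker walking 192.168.49/58/67/76 (all held by existing minikube bridges on this host) before settling on 192.168.85.0/24. An illustrative way to see which /24s are already claimed, not minikube's actual code, followed by a condensed form of the create command it then runs (the --ip-masq/--icc driver options and labels are omitted here):
	
	# Illustrative only: subnets already owned by docker networks on the host
	docker network ls --format '{{.Name}}' |
	  xargs -r -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	# then create the first free 192.168.X.0/24, as the Run line above does
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 no-preload-680879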
	I1101 10:17:54.681029  740314 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-680879" container
	I1101 10:17:54.681103  740314 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:17:54.700896  740314 cli_runner.go:164] Run: docker volume create no-preload-680879 --label name.minikube.sigs.k8s.io=no-preload-680879 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:17:54.702737  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:17:54.722708  740314 oci.go:103] Successfully created a docker volume no-preload-680879
	I1101 10:17:54.722816  740314 cli_runner.go:164] Run: docker run --rm --name no-preload-680879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-680879 --entrypoint /usr/bin/test -v no-preload-680879:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:17:54.722882  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:17:54.728568  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:17:54.746598  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1101 10:17:54.770083  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:17:54.837135  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:17:54.853925  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:17:54.853956  740314 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 332.276568ms
	I1101 10:17:54.853975  740314 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:17:54.854387  740314 cache.go:162] opening:  /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:17:55.193234  740314 oci.go:107] Successfully prepared a docker volume no-preload-680879
	I1101 10:17:55.193274  740314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1101 10:17:55.193371  740314 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:17:55.193398  740314 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:17:55.193455  740314 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:17:55.261204  740314 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-680879 --name no-preload-680879 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-680879 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-680879 --network no-preload-680879 --ip 192.168.85.2 --volume no-preload-680879:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
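	The single long docker run above is what actually creates the node container. Restated here with the flags grouped for readability (values copied from this run; labels dropped and the image digest pin omitted, the full digest is in the Run line above):
	
	# Annotated restatement of the container-create command above, not an additional step.
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --hostname no-preload-680879 --name no-preload-680879 \
	  --network no-preload-680879 --ip 192.168.85.2 \
	  --volume no-preload-680879:/var \
	  --memory=3072mb -e container=docker --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
	  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773
	# --network/--ip attach the node to the bridge created above at a static address;
	# /var lives on the named volume so images and etcd data survive container recreation;
	# the 127.0.0.1::PORT publishes map SSH (22) and the API server (8443) to random
	# localhost ports, which is why later steps dial 127.0.0.1:33178.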
	I1101 10:17:55.433717  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:17:55.433746  740314 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 912.265964ms
	I1101 10:17:55.433761  740314 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:17:55.571376  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Running}}
	I1101 10:17:55.592546  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:17:55.612792  740314 cli_runner.go:164] Run: docker exec no-preload-680879 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:17:55.663201  740314 oci.go:144] the created container "no-preload-680879" has a running status.
	I1101 10:17:55.663244  740314 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa...
	I1101 10:17:56.339008  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:17:56.339041  740314 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.817357447s
	I1101 10:17:56.339064  740314 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:17:56.353273  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:17:56.353299  740314 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.831581009s
	I1101 10:17:56.353313  740314 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:17:56.455888  740314 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:17:56.487344  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:17:56.512170  740314 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:17:56.512194  740314 kic_runner.go:114] Args: [docker exec --privileged no-preload-680879 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:17:56.527585  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:17:56.527618  740314 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.006015881s
	I1101 10:17:56.527633  740314 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:17:56.566801  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:17:56.588668  740314 machine.go:94] provisionDockerMachine start ...
	I1101 10:17:56.588764  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:56.610299  740314 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:56.610586  740314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1101 10:17:56.610601  740314 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:17:56.620626  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:17:56.620660  740314 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.099116432s
	I1101 10:17:56.620679  740314 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:17:56.759921  740314 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-680879
	
	I1101 10:17:56.759958  740314 ubuntu.go:182] provisioning hostname "no-preload-680879"
	I1101 10:17:56.760025  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:56.781024  740314 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:56.781597  740314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1101 10:17:56.781625  740314 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-680879 && echo "no-preload-680879" | sudo tee /etc/hostname
	I1101 10:17:56.871686  740314 cache.go:157] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:17:56.871718  740314 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.350192312s
	I1101 10:17:56.871734  740314 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:17:56.871757  740314 cache.go:87] Successfully saved all images to host disk.
	I1101 10:17:56.939360  740314 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-680879
	
	I1101 10:17:56.939455  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:56.957720  740314 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:56.957974  740314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1101 10:17:56.957993  740314 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-680879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-680879/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-680879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:17:57.101866  740314 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:17:57.101908  740314 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:17:57.101930  740314 ubuntu.go:190] setting up certificates
	I1101 10:17:57.101943  740314 provision.go:84] configureAuth start
	I1101 10:17:57.102011  740314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:17:57.119619  740314 provision.go:143] copyHostCerts
	I1101 10:17:57.119682  740314 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:17:57.119692  740314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:17:57.119759  740314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:17:57.119894  740314 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:17:57.119904  740314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:17:57.119936  740314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:17:57.120058  740314 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:17:57.120070  740314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:17:57.120096  740314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:17:57.120152  740314 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.no-preload-680879 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-680879]
	I1101 10:17:57.191661  740314 provision.go:177] copyRemoteCerts
	I1101 10:17:57.191731  740314 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:17:57.191794  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:57.210790  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:57.315284  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:17:57.336315  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:17:57.355800  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:17:57.379678  740314 provision.go:87] duration metric: took 277.720039ms to configureAuth
	I1101 10:17:57.379711  740314 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:17:57.379936  740314 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:17:57.380129  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:57.399271  740314 main.go:143] libmachine: Using SSH client type: native
	I1101 10:17:57.399495  740314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1101 10:17:57.399513  740314 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:17:57.672306  740314 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:17:57.672343  740314 machine.go:97] duration metric: took 1.083651161s to provisionDockerMachine
	I1101 10:17:57.672358  740314 client.go:176] duration metric: took 3.123842795s to LocalClient.Create
	I1101 10:17:57.672375  740314 start.go:167] duration metric: took 3.123928426s to libmachine.API.Create "no-preload-680879"
	I1101 10:17:57.672386  740314 start.go:293] postStartSetup for "no-preload-680879" (driver="docker")
	I1101 10:17:57.672407  740314 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:17:57.672475  740314 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:17:57.672524  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:57.693139  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:57.799034  740314 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:17:57.802797  740314 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:17:57.802832  740314 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:17:57.802860  740314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:17:57.802922  740314 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:17:57.803020  740314 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:17:57.803151  740314 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:17:57.812099  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:17:57.833816  740314 start.go:296] duration metric: took 161.404788ms for postStartSetup
	I1101 10:17:57.834255  740314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:17:57.853437  740314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json ...
	I1101 10:17:57.853724  740314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:17:57.853780  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:57.874008  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:57.976717  740314 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:17:57.982245  740314 start.go:128] duration metric: took 3.436549965s to createHost
	I1101 10:17:57.982282  740314 start.go:83] releasing machines lock for "no-preload-680879", held for 3.436721676s
	I1101 10:17:57.982356  740314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:17:58.000977  740314 ssh_runner.go:195] Run: cat /version.json
	I1101 10:17:58.001068  740314 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:17:58.001089  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:58.001139  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:17:58.020529  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:58.020758  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:17:58.205927  740314 ssh_runner.go:195] Run: systemctl --version
	I1101 10:17:58.213192  740314 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:17:58.253117  740314 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:17:58.258886  740314 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:17:58.258962  740314 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:17:58.288803  740314 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:17:58.288830  740314 start.go:496] detecting cgroup driver to use...
	I1101 10:17:58.288893  740314 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:17:58.288941  740314 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:17:58.307356  740314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:17:58.322675  740314 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:17:58.322736  740314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:17:58.341157  740314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:17:58.360947  740314 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:17:58.456227  740314 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:17:58.561057  740314 docker.go:234] disabling docker service ...
	I1101 10:17:58.561131  740314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:17:58.582658  740314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:17:58.597232  740314 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:17:58.695614  740314 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:17:58.793168  740314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:17:58.807256  740314 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:17:58.823260  740314 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:17:58.823330  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.834779  740314 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:17:58.834884  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.845319  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.855201  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.864874  740314 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:17:58.873856  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.883617  740314 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.899043  740314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:17:58.908700  740314 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:17:58.917085  740314 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:17:58.925384  740314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:17:59.008235  740314 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:17:59.118729  740314 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:17:59.118806  740314 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:17:59.123077  740314 start.go:564] Will wait 60s for crictl version
	I1101 10:17:59.123150  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.127128  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:17:59.155569  740314 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:17:59.155656  740314 ssh_runner.go:195] Run: crio --version
	I1101 10:17:59.186953  740314 ssh_runner.go:195] Run: crio --version
	I1101 10:17:59.219966  740314 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:17:59.221021  740314 cli_runner.go:164] Run: docker network inspect no-preload-680879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:17:59.239482  740314 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:17:59.244202  740314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:17:59.255809  740314 kubeadm.go:884] updating cluster {Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:17:59.255946  740314 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:17:59.255980  740314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:17:59.284438  740314 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 10:17:59.284468  740314 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 10:17:59.284523  740314 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:17:59.284528  740314 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.284563  740314 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 10:17:59.284605  740314 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.284618  740314 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.284623  740314 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.284646  740314 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.284603  740314 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.286000  740314 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.286051  740314 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.286080  740314 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.286006  740314 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:17:59.286113  740314 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 10:17:59.286129  740314 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.286137  740314 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.286007  740314 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
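	(Annotation: every daemon lookup above fails because the host's Docker daemon has none of the kube images, so the run falls back to minikube's on-disk image cache. A rough sketch of the first step in that decision, listing what the container runtime already has via crictl and diffing it against the required set; it assumes crictl on PATH and sudo access and is not minikube's actual code.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictl images --output json returns {"images":[{"repoTags":[...]}, ...]}.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// missingImages reports which of the required tags the runtime does not have yet.
	func missingImages(required []string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var listed crictlImages
		if err := json.Unmarshal(out, &listed); err != nil {
			return nil, err
		}
		have := map[string]bool{}
		for _, img := range listed.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var missing []string
		for _, want := range required {
			if !have[want] {
				missing = append(missing, want)
			}
		}
		return missing, nil
	}

	func main() {
		missing, err := missingImages([]string{
			"registry.k8s.io/kube-apiserver:v1.34.1",
			"registry.k8s.io/pause:3.10.1",
		})
		if err != nil {
			fmt.Println("crictl not available:", err)
			return
		}
		fmt.Println("need to load from cache:", missing)
	}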
	I1101 10:17:59.015929  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:17:59.015970  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:17:59.023848  738963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt ...
	I1101 10:17:59.023883  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: {Name:mk60f4f77d4ab12ba9513b9be0f8dc061ffb192a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.024070  738963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.key ...
	I1101 10:17:59.024088  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.key: {Name:mka0dfbc519768f58fceb8fac999651371c9277a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.024213  738963 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f
	I1101 10:17:59.024235  738963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt.91d3229f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1101 10:17:59.350123  738963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt.91d3229f ...
	I1101 10:17:59.350152  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt.91d3229f: {Name:mke720aa52c5354bd5eabee42f543e759ac9c73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.350361  738963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f ...
	I1101 10:17:59.350383  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f: {Name:mk0fa0fd43be446018f9e7889bd59f3ff8f7bc1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.350501  738963 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt.91d3229f -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt
	I1101 10:17:59.350586  738963 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key
	I1101 10:17:59.350641  738963 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key
	I1101 10:17:59.350657  738963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt with IP's: []
	I1101 10:17:59.534613  738963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt ...
	I1101 10:17:59.534657  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt: {Name:mkdfa4ecfaa9cdd60452e28a809d1069cb4a4e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:17:59.534923  738963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key ...
	I1101 10:17:59.534997  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key: {Name:mk356ae409e016efeaed9ce8e67efa99bdf488f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
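	(Annotation: the crypto.go/certs.go lines above generate per-profile certificates signed by the shared minikube CA, with the apiserver cert carrying the service IP, loopback and node IP as SANs. The sketch below is a compact stand-in for that step using only the Go standard library: a throwaway CA signs a server cert with the same four IP SANs seen in the log. It is illustrative only, not minikube's implementation.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA, standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the IP SANs listed in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}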
	I1101 10:17:59.535274  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:17:59.535317  738963 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:17:59.535330  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:17:59.535358  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:17:59.535382  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:17:59.535408  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:17:59.535458  738963 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:17:59.536448  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:17:59.566815  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:17:59.592988  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:17:59.620788  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:17:59.648105  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:17:59.679872  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:17:59.699674  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:17:59.720891  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:17:59.740573  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:17:59.762751  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:17:59.782484  738963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:17:59.802771  738963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:17:59.820170  738963 ssh_runner.go:195] Run: openssl version
	I1101 10:17:59.829227  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:17:59.840352  738963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:17:59.845423  738963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:17:59.845512  738963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:17:59.886176  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:17:59.898071  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:17:59.909228  738963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:17:59.915108  738963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:17:59.915181  738963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:17:59.953104  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:17:59.963399  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:17:59.974929  738963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:17:59.980231  738963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:17:59.980305  738963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:18:00.020858  738963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
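	(Annotation: the repeated pattern above, copy the PEM into /usr/share/ca-certificates, run `openssl x509 -hash -noout`, then symlink `<hash>.0` in /etc/ssl/certs, populates OpenSSL's hashed lookup directory so the CA is found by subject hash. A small local sketch of the same idea; paths are illustrative, and the real run issues these commands over SSH with sudo.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA symlinks certPath into certsDir under its OpenSSL subject hash,
	// the same <hash>.0 naming the ln -fs commands above create.
	func installCA(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // already installed
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		// hypothetical paths; the real run targets /usr/share/ca-certificates and /etc/ssl/certs
		if err := installCA("minikubeCA.pem", "."); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}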
	I1101 10:18:00.033137  738963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:18:00.039112  738963 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:18:00.039182  738963 kubeadm.go:401] StartCluster: {Name:old-k8s-version-556573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:18:00.039287  738963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:18:00.039353  738963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:18:00.083742  738963 cri.go:89] found id: ""
	I1101 10:18:00.083825  738963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:18:00.106819  738963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:18:00.117922  738963 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:18:00.117987  738963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:18:00.129125  738963 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:18:00.129155  738963 kubeadm.go:158] found existing configuration files:
	
	I1101 10:18:00.129216  738963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:18:00.138980  738963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:18:00.139046  738963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:18:00.149281  738963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:18:00.159942  738963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:18:00.160011  738963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:18:00.169889  738963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:18:00.180613  738963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:18:00.180700  738963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:18:00.191183  738963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:18:00.203862  738963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:18:00.203940  738963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:18:00.215749  738963 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:18:00.342016  738963 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:18:00.449936  738963 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:17:59.444649  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.447737  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.449432  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.453874  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1101 10:17:59.494793  740314 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1101 10:17:59.494869  740314 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.494930  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.496996  740314 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1101 10:17:59.497037  740314 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.497037  740314 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1101 10:17:59.497071  740314 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.497084  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.497121  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.500444  740314 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1101 10:17:59.500496  740314 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1101 10:17:59.500536  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.500543  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.502918  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.502924  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.506493  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.509637  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.540787  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.540935  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:17:59.544140  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.544238  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.549134  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.558751  740314 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1101 10:17:59.558804  740314 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.559078  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.566504  740314 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1101 10:17:59.566559  740314 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.566613  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.582355  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 10:17:59.582412  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:17:59.582464  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 10:17:59.582490  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 10:17:59.601305  740314 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1101 10:17:59.601346  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.601354  740314 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.601423  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.601443  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:17:59.621556  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 10:17:59.621614  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1101 10:17:59.621653  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:17:59.621663  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 10:17:59.621704  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:17:59.621713  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 10:17:59.621729  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:17:59.639291  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.639344  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.639377  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.639382  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1101 10:17:59.639398  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1101 10:17:59.639416  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1101 10:17:59.639410  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1101 10:17:59.677939  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1101 10:17:59.678011  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1101 10:17:59.678054  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1101 10:17:59.678172  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1101 10:17:59.695513  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.700932  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 10:17:59.700965  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 10:17:59.860502  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 10:17:59.860551  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1101 10:17:59.860585  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1101 10:17:59.860592  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 10:17:59.860691  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:17:59.861074  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 10:17:59.861164  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:17:59.920136  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1101 10:17:59.920150  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 10:17:59.920201  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1101 10:17:59.920284  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1101 10:17:59.920315  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:17:59.920319  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1101 10:17:59.950783  740314 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1101 10:17:59.950860  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1101 10:17:59.991758  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1101 10:17:59.991816  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1101 10:18:00.189619  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1101 10:18:00.323537  740314 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:18:00.323616  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1101 10:18:00.590359  740314 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:01.700116  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.376469192s)
	I1101 10:18:01.700156  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1101 10:18:01.700174  740314 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.109776162s)
	I1101 10:18:01.700189  740314 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:18:01.700226  740314 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 10:18:01.700254  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 10:18:01.700262  740314 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:01.700302  740314 ssh_runner.go:195] Run: which crictl
	I1101 10:18:02.868720  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.168433694s)
	I1101 10:18:02.868737  740314 ssh_runner.go:235] Completed: which crictl: (1.16841485s)
	I1101 10:18:02.868757  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1101 10:18:02.868792  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:02.868792  740314 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:18:02.868860  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 10:18:04.065722  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.196829212s)
	I1101 10:18:04.065758  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1101 10:18:04.065768  740314 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.196948933s)
	I1101 10:18:04.065798  740314 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:18:04.065858  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 10:18:04.065910  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:04.017260  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:04.017338  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:05.432984  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.367095482s)
	I1101 10:18:05.433013  740314 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.367065339s)
	I1101 10:18:05.433088  740314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:05.433020  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1101 10:18:05.433181  740314 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:18:05.433235  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 10:18:05.466257  740314 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 10:18:05.466366  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:18:06.612033  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.178768896s)
	I1101 10:18:06.612063  740314 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.145678163s)
	I1101 10:18:06.612067  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1101 10:18:06.612088  740314 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1101 10:18:06.612101  740314 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:18:06.612114  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1101 10:18:06.612163  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1101 10:18:09.020044  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:09.020094  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:09.141677  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:48880->192.168.103.2:8443: read: connection reset by peer
	I1101 10:18:09.512108  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:09.512610  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:10.562506  738963 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 10:18:10.562620  738963 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:18:10.562755  738963 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:18:10.562868  738963 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:18:10.562943  738963 kubeadm.go:319] OS: Linux
	I1101 10:18:10.563025  738963 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:18:10.563110  738963 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:18:10.563190  738963 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:18:10.563269  738963 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:18:10.563357  738963 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:18:10.563434  738963 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:18:10.563512  738963 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:18:10.563595  738963 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:18:10.563704  738963 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:18:10.563874  738963 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:18:10.564015  738963 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 10:18:10.564107  738963 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:18:10.566128  738963 out.go:252]   - Generating certificates and keys ...
	I1101 10:18:10.566232  738963 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:18:10.566320  738963 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:18:10.566418  738963 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:18:10.566501  738963 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:18:10.566589  738963 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:18:10.566680  738963 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:18:10.566791  738963 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:18:10.567013  738963 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-556573] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 10:18:10.567096  738963 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:18:10.567285  738963 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-556573] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 10:18:10.567380  738963 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:18:10.567475  738963 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:18:10.567546  738963 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:18:10.567626  738963 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:18:10.567708  738963 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:18:10.567799  738963 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:18:10.567915  738963 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:18:10.568011  738963 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:18:10.568135  738963 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:18:10.568225  738963 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:18:10.569340  738963 out.go:252]   - Booting up control plane ...
	I1101 10:18:10.569465  738963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:18:10.569587  738963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:18:10.569683  738963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:18:10.569855  738963 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:18:10.570001  738963 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:18:10.570068  738963 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:18:10.570278  738963 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 10:18:10.570410  738963 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.502593 seconds
	I1101 10:18:10.570557  738963 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:18:10.570730  738963 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:18:10.570809  738963 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:18:10.571102  738963 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-556573 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:18:10.571192  738963 kubeadm.go:319] [bootstrap-token] Using token: a2tmz3.w8jg1dq1lgatlgyo
	I1101 10:18:10.572772  738963 out.go:252]   - Configuring RBAC rules ...
	I1101 10:18:10.572931  738963 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:18:10.573037  738963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:18:10.573204  738963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:18:10.573356  738963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:18:10.573536  738963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:18:10.573675  738963 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:18:10.573828  738963 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:18:10.573900  738963 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:18:10.573953  738963 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:18:10.573968  738963 kubeadm.go:319] 
	I1101 10:18:10.574059  738963 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:18:10.574072  738963 kubeadm.go:319] 
	I1101 10:18:10.574172  738963 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:18:10.574179  738963 kubeadm.go:319] 
	I1101 10:18:10.574235  738963 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:18:10.574337  738963 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:18:10.574421  738963 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:18:10.574430  738963 kubeadm.go:319] 
	I1101 10:18:10.574512  738963 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:18:10.574521  738963 kubeadm.go:319] 
	I1101 10:18:10.574584  738963 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:18:10.574597  738963 kubeadm.go:319] 
	I1101 10:18:10.574675  738963 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:18:10.574779  738963 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:18:10.574915  738963 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:18:10.574926  738963 kubeadm.go:319] 
	I1101 10:18:10.575060  738963 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:18:10.575181  738963 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:18:10.575199  738963 kubeadm.go:319] 
	I1101 10:18:10.575357  738963 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a2tmz3.w8jg1dq1lgatlgyo \
	I1101 10:18:10.575516  738963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 10:18:10.575548  738963 kubeadm.go:319] 	--control-plane 
	I1101 10:18:10.575560  738963 kubeadm.go:319] 
	I1101 10:18:10.575686  738963 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:18:10.575695  738963 kubeadm.go:319] 
	I1101 10:18:10.575814  738963 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a2tmz3.w8jg1dq1lgatlgyo \
	I1101 10:18:10.575995  738963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
	I1101 10:18:10.576015  738963 cni.go:84] Creating CNI manager for ""
	I1101 10:18:10.576027  738963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:18:10.577402  738963 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:18:10.580112  738963 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:18:10.586359  738963 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1101 10:18:10.586380  738963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:18:10.604279  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:18:11.392799  738963 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:18:11.392957  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:11.392996  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-556573 minikube.k8s.io/updated_at=2025_11_01T10_18_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=old-k8s-version-556573 minikube.k8s.io/primary=true
	I1101 10:18:11.404984  738963 ops.go:34] apiserver oom_adj: -16
	I1101 10:18:11.493929  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:11.994991  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:12.494166  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:12.993981  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:13.494969  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
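	(Annotation: the repeated `get sa default` runs above are a readiness poll: kubeadm has finished, but workloads cannot be admitted until the default ServiceAccount exists in the default namespace, so the run retries roughly every half second. A minimal sketch of the same poll; it assumes kubectl and a reachable kubeconfig, and the interval and timeout are arbitrary.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
	// timeout expires, mirroring the retry loop visible in the log above.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
				"get", "sa", "default").Run()
			if err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default ServiceAccount not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}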
	I1101 10:18:10.301410  740314 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.689215828s)
	I1101 10:18:10.301445  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1101 10:18:10.301480  740314 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:18:10.301544  740314 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 10:18:10.932918  740314 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 10:18:10.932975  740314 cache_images.go:125] Successfully loaded all cached images
	I1101 10:18:10.932982  740314 cache_images.go:94] duration metric: took 11.648498761s to LoadCachedImages
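	(Annotation: each image in the LoadCachedImages flow above follows the same steps: a stat existence check on the node, an scp of the cached tarball when the stat fails, `podman load -i` to import it, and a crictl rmi of any stale tag beforehand. A condensed sketch of the check-then-load half; it assumes podman is installed locally and the tarball path is a placeholder.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureImageLoaded imports an image tarball into the runtime only when the
	// tarball is already present, mirroring the stat -> scp -> podman load
	// sequence in the log above.
	func ensureImageLoaded(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			// In the real flow this is where the tarball would be copied over
			// from the host cache (~/.minikube/cache/images/...) before loading.
			return fmt.Errorf("tarball not present yet: %w", err)
		}
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := ensureImageLoaded("/var/lib/minikube/images/pause_3.10.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}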
	I1101 10:18:10.933000  740314 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:18:10.933148  740314 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-680879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:18:10.933315  740314 ssh_runner.go:195] Run: crio config
	I1101 10:18:10.982286  740314 cni.go:84] Creating CNI manager for ""
	I1101 10:18:10.982308  740314 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:18:10.982326  740314 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:18:10.982352  740314 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-680879 NodeName:no-preload-680879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:18:10.982500  740314 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-680879"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
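For reference, the multi-document kubeadm/kubelet/kube-proxy config rendered above can be sanity-checked offline before it is copied to /var/tmp/minikube/kubeadm.yaml.new. The sketch below is not minikube's code; the local file name is an assumption. It decodes each YAML document with gopkg.in/yaml.v3 and prints the kind plus the kubelet cgroupDriver, which has to agree with crio's cgroup manager (systemd on this image).

package main

// Minimal sketch: walk the multi-document config shown above and report each
// document's kind and the kubelet cgroupDriver. "kubeadm.yaml" is a
// hypothetical local copy of the generated config.
import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		doc := map[string]interface{}{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		kind, _ := doc["kind"].(string)
		fmt.Println("found document:", kind)
		if kind == "KubeletConfiguration" {
			// crio here uses the systemd cgroup manager, so the kubelet
			// must be configured the same way or pods will not start.
			fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
		}
	}
}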
	
	I1101 10:18:10.982575  740314 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:18:10.991722  740314 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1101 10:18:10.991775  740314 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1101 10:18:11.000719  740314 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1101 10:18:11.000782  740314 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1101 10:18:11.000808  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1101 10:18:11.000829  740314 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1101 10:18:11.005211  740314 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1101 10:18:11.005242  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1101 10:18:12.378760  740314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:18:12.393536  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1101 10:18:12.398283  740314 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1101 10:18:12.398324  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1101 10:18:12.653488  740314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1101 10:18:12.658271  740314 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1101 10:18:12.658314  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
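The kubectl/kubelet/kubeadm downloads above carry a checksum=file:...sha256 query, i.e. each binary is verified against its published .sha256 file before being cached and scp'd to the node. A stand-alone sketch of that verification follows; the local file names are assumptions and this is not minikube's download.go.

package main

// Sketch: verify a downloaded kubelet binary against its published SHA-256.
import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("kubelet") // hypothetical local download
	if err != nil {
		panic(err)
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	// The .sha256 file on dl.k8s.io typically contains just the hex digest.
	want, err := os.ReadFile("kubelet.sha256")
	if err != nil {
		panic(err)
	}
	if got == strings.TrimSpace(string(want)) {
		fmt.Println("checksum ok:", got)
	} else {
		fmt.Println("checksum mismatch: got", got)
	}
}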
	I1101 10:18:12.835512  740314 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:18:12.844449  740314 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:18:12.858210  740314 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:18:12.874634  740314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1101 10:18:12.888825  740314 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:18:12.892966  740314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:18:12.903912  740314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:18:12.986457  740314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:18:13.011948  740314 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879 for IP: 192.168.85.2
	I1101 10:18:13.011980  740314 certs.go:195] generating shared ca certs ...
	I1101 10:18:13.012007  740314 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.012202  740314 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:18:13.012263  740314 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:18:13.012276  740314 certs.go:257] generating profile certs ...
	I1101 10:18:13.012343  740314 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.key
	I1101 10:18:13.012374  740314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt with IP's: []
	I1101 10:18:13.195814  740314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt ...
	I1101 10:18:13.195869  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: {Name:mk67b702ea5503c66efd1bd87a0c98646d7640ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.196068  740314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.key ...
	I1101 10:18:13.196087  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.key: {Name:mkc60edbc2b1463c81ab8781aca273c413ceaa90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.196212  740314 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d
	I1101 10:18:13.196233  740314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt.0ccb300d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 10:18:13.484150  740314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt.0ccb300d ...
	I1101 10:18:13.484193  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt.0ccb300d: {Name:mk661ef05477b162b65c9212fe9778e04d74403d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.484407  740314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d ...
	I1101 10:18:13.484430  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d: {Name:mk4a7ae58d6bfc52b3ce47998c0eb69bf2cee6a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:13.484579  740314 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt.0ccb300d -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt
	I1101 10:18:13.484682  740314 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key
	I1101 10:18:13.484767  740314 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key
	I1101 10:18:13.484791  740314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt with IP's: []
	I1101 10:18:14.224353  740314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt ...
	I1101 10:18:14.224393  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt: {Name:mk145b341e88e9e42f976d5f15bd79401a807fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:14.224644  740314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key ...
	I1101 10:18:14.224664  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key: {Name:mk392520b68e41d3d7e442fe2e4ed6bf585db2eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:14.224921  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:18:14.224974  740314 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:18:14.224991  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:18:14.225022  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:18:14.225052  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:18:14.225086  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:18:14.225143  740314 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
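The crypto.go lines above generate the profile's client, apiserver and proxy-client certificates, each signed by the shared minikubeCA, with the apiserver cert carrying IP SANs for the service VIP, loopback, and the node IP. The following is only a rough stand-alone illustration of that pattern using the Go standard library; the names and SANs are copied from the log and this is not minikube's implementation.

package main

// Sketch: create a CA and sign an apiserver-style serving cert with the IP
// SANs seen in the log (10.96.0.1, 127.0.0.1, 192.168.85.2). Illustrative only.
import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}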
	I1101 10:18:14.225905  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:18:14.246222  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:18:14.264983  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:18:14.283717  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:18:14.302280  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:18:14.321258  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:18:10.012047  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:10.012547  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:10.511997  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:13.994533  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:14.494694  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:14.994700  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:15.494749  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:15.994770  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:16.494735  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:16.994962  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:17.494885  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:17.995628  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:18.494068  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:14.340408  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:18:14.359345  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:18:14.377885  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:18:14.398705  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:18:14.417804  740314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:18:14.436902  740314 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:18:14.451161  740314 ssh_runner.go:195] Run: openssl version
	I1101 10:18:14.458076  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:18:14.468144  740314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:18:14.472447  740314 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:18:14.472520  740314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:18:14.514250  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:18:14.524433  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:18:14.534351  740314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:18:14.538753  740314 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:18:14.538819  740314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:18:14.579706  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:18:14.589878  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:18:14.599447  740314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:18:14.603568  740314 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:18:14.603691  740314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:18:14.640281  740314 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:18:14.649758  740314 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:18:14.653828  740314 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:18:14.653919  740314 kubeadm.go:401] StartCluster: {Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:18:14.654020  740314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:18:14.654081  740314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:18:14.684951  740314 cri.go:89] found id: ""
	I1101 10:18:14.685028  740314 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:18:14.694025  740314 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:18:14.705300  740314 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:18:14.705358  740314 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:18:14.713961  740314 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:18:14.713981  740314 kubeadm.go:158] found existing configuration files:
	
	I1101 10:18:14.714023  740314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:18:14.722639  740314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:18:14.722695  740314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:18:14.730701  740314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:18:14.739183  740314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:18:14.739233  740314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:18:14.747740  740314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:18:14.756348  740314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:18:14.756413  740314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:18:14.764415  740314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:18:14.772970  740314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:18:14.773057  740314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:18:14.781255  740314 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:18:14.839290  740314 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:18:14.896739  740314 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:18:15.513068  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:15.513129  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:18.994935  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:19.494014  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:19.994641  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:20.494597  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:20.994934  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:21.494099  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:21.994075  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:22.494059  738963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:22.578536  738963 kubeadm.go:1114] duration metric: took 11.18565038s to wait for elevateKubeSystemPrivileges
	I1101 10:18:22.578576  738963 kubeadm.go:403] duration metric: took 22.539398327s to StartCluster
	I1101 10:18:22.578602  738963 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:22.578690  738963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:18:22.579984  738963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:22.580235  738963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:18:22.580246  738963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:18:22.580338  738963 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:18:22.580452  738963 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-556573"
	I1101 10:18:22.580461  738963 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:18:22.580470  738963 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-556573"
	I1101 10:18:22.580500  738963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-556573"
	I1101 10:18:22.580474  738963 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-556573"
	I1101 10:18:22.580727  738963 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:18:22.580985  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:18:22.581324  738963 out.go:179] * Verifying Kubernetes components...
	I1101 10:18:22.581511  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:18:22.582749  738963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:18:22.610747  738963 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-556573"
	I1101 10:18:22.610809  738963 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:18:22.611749  738963 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:18:22.614141  738963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:22.615053  738963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:18:22.615085  738963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:18:22.615153  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:18:22.645273  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:18:22.649323  738963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:18:22.649351  738963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:18:22.649425  738963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:18:22.675603  738963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:18:22.691971  738963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:18:22.741757  738963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:18:22.773809  738963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:18:22.801962  738963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:18:22.939018  738963 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
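The long sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway IP from inside pods: a hosts{} block is spliced in immediately before the forward directive of the Corefile. A minimal re-creation of that string edit is sketched below; it is illustrative only and not the sed actually run by minikube.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza ahead of the Corefile's
// "forward . /etc/resolv.conf" line, mirroring the sed edit in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Tiny example Corefile (hypothetical), edited with the node's gateway IP.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.94.1"))
}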
	I1101 10:18:22.940339  738963 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-556573" to be "Ready" ...
	I1101 10:18:23.162439  738963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:18:23.163315  738963 addons.go:515] duration metric: took 582.971716ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:18:23.443141  738963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-556573" context rescaled to 1 replicas
	I1101 10:18:24.485587  740314 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:18:24.485650  740314 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:18:24.485767  740314 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:18:24.485894  740314 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:18:24.485942  740314 kubeadm.go:319] OS: Linux
	I1101 10:18:24.485997  740314 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:18:24.486057  740314 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:18:24.486128  740314 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:18:24.486190  740314 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:18:24.486260  740314 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:18:24.486306  740314 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:18:24.486351  740314 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:18:24.486389  740314 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:18:24.486489  740314 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:18:24.486629  740314 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:18:24.486766  740314 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:18:24.486864  740314 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:18:24.488111  740314 out.go:252]   - Generating certificates and keys ...
	I1101 10:18:24.488202  740314 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:18:24.488277  740314 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:18:24.488356  740314 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:18:24.488457  740314 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:18:24.488524  740314 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:18:24.488586  740314 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:18:24.488648  740314 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:18:24.488750  740314 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-680879] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:18:24.488802  740314 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:18:24.488934  740314 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-680879] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:18:24.489028  740314 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:18:24.489135  740314 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:18:24.489215  740314 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:18:24.489273  740314 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:18:24.489318  740314 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:18:24.489396  740314 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:18:24.489486  740314 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:18:24.489547  740314 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:18:24.489594  740314 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:18:24.489686  740314 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:18:24.489774  740314 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:18:24.491008  740314 out.go:252]   - Booting up control plane ...
	I1101 10:18:24.491102  740314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:18:24.491215  740314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:18:24.491319  740314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:18:24.491443  740314 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:18:24.491523  740314 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:18:24.491624  740314 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:18:24.491711  740314 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:18:24.491759  740314 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:18:24.491918  740314 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:18:24.492017  740314 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:18:24.492072  740314 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001607231s
	I1101 10:18:24.492150  740314 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:18:24.492227  740314 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:18:24.492303  740314 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:18:24.492370  740314 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:18:24.492447  740314 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.621712454s
	I1101 10:18:24.492514  740314 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.104922609s
	I1101 10:18:24.492575  740314 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001612362s
	I1101 10:18:24.492675  740314 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:18:24.492796  740314 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:18:24.492910  740314 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:18:24.493150  740314 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-680879 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:18:24.493205  740314 kubeadm.go:319] [bootstrap-token] Using token: psgks8.xzghorqz7mq8617s
	I1101 10:18:24.494307  740314 out.go:252]   - Configuring RBAC rules ...
	I1101 10:18:24.494396  740314 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:18:24.494472  740314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:18:24.494626  740314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:18:24.494738  740314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:18:24.494865  740314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:18:24.494943  740314 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:18:24.495054  740314 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:18:24.495099  740314 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:18:24.495139  740314 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:18:24.495145  740314 kubeadm.go:319] 
	I1101 10:18:24.495195  740314 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:18:24.495201  740314 kubeadm.go:319] 
	I1101 10:18:24.495271  740314 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:18:24.495276  740314 kubeadm.go:319] 
	I1101 10:18:24.495297  740314 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:18:24.495360  740314 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:18:24.495408  740314 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:18:24.495414  740314 kubeadm.go:319] 
	I1101 10:18:24.495467  740314 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:18:24.495473  740314 kubeadm.go:319] 
	I1101 10:18:24.495541  740314 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:18:24.495557  740314 kubeadm.go:319] 
	I1101 10:18:24.495633  740314 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:18:24.495735  740314 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:18:24.495832  740314 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:18:24.495851  740314 kubeadm.go:319] 
	I1101 10:18:24.495967  740314 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:18:24.496041  740314 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:18:24.496051  740314 kubeadm.go:319] 
	I1101 10:18:24.496123  740314 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token psgks8.xzghorqz7mq8617s \
	I1101 10:18:24.496226  740314 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 10:18:24.496259  740314 kubeadm.go:319] 	--control-plane 
	I1101 10:18:24.496268  740314 kubeadm.go:319] 
	I1101 10:18:24.496356  740314 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:18:24.496363  740314 kubeadm.go:319] 
	I1101 10:18:24.496434  740314 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token psgks8.xzghorqz7mq8617s \
	I1101 10:18:24.496561  740314 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
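The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 digest of the cluster CA's Subject Public Key Info. It can be recomputed from the CA certificate with a short sketch like the following; reading ca.crt from the working directory is an assumption (on the node it lives under /var/lib/minikube/certs/).

package main

// Sketch: recompute kubeadm's discovery-token-ca-cert-hash from ca.crt.
import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash covers the DER-encoded Subject Public Key Info of the CA.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}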
	I1101 10:18:24.496580  740314 cni.go:84] Creating CNI manager for ""
	I1101 10:18:24.496589  740314 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:18:24.497695  740314 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:18:20.513640  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:20.513705  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1101 10:18:24.945133  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	W1101 10:18:27.443607  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	I1101 10:18:24.498571  740314 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:18:24.503506  740314 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:18:24.503527  740314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:18:24.518226  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:18:24.784362  740314 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:18:24.784497  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-680879 minikube.k8s.io/updated_at=2025_11_01T10_18_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=no-preload-680879 minikube.k8s.io/primary=true
	I1101 10:18:24.784578  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:24.877234  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:24.877234  740314 ops.go:34] apiserver oom_adj: -16
	I1101 10:18:25.378064  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:25.878239  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:26.377566  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:26.878034  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:27.377406  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:27.877706  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:28.377733  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:28.878109  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:29.377335  740314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:18:29.444963  740314 kubeadm.go:1114] duration metric: took 4.660595599s to wait for elevateKubeSystemPrivileges
	I1101 10:18:29.445008  740314 kubeadm.go:403] duration metric: took 14.791108031s to StartCluster
	I1101 10:18:29.445035  740314 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:29.445122  740314 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:18:29.446569  740314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:18:29.446869  740314 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:18:29.446907  740314 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:18:29.446960  740314 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:18:29.447067  740314 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:18:29.447081  740314 addons.go:70] Setting storage-provisioner=true in profile "no-preload-680879"
	I1101 10:18:29.447099  740314 addons.go:70] Setting default-storageclass=true in profile "no-preload-680879"
	I1101 10:18:29.447132  740314 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-680879"
	I1101 10:18:29.447103  740314 addons.go:239] Setting addon storage-provisioner=true in "no-preload-680879"
	I1101 10:18:29.447269  740314 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:18:29.447579  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:18:29.447731  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:18:29.450386  740314 out.go:179] * Verifying Kubernetes components...
	I1101 10:18:29.451973  740314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:18:29.470998  740314 addons.go:239] Setting addon default-storageclass=true in "no-preload-680879"
	I1101 10:18:29.471050  740314 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:18:29.471186  740314 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:18:25.514207  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:25.514279  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:29.471534  740314 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:18:29.472271  740314 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:18:29.472292  740314 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:18:29.472361  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:18:29.495730  740314 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:18:29.495764  740314 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:18:29.495853  740314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:18:29.496179  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:18:29.519292  740314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:18:29.542601  740314 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:18:29.596581  740314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:18:29.616447  740314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:18:29.638600  740314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:18:29.720335  740314 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:18:29.721743  740314 node_ready.go:35] waiting up to 6m0s for node "no-preload-680879" to be "Ready" ...
	I1101 10:18:29.921096  740314 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1101 10:18:29.943881  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	W1101 10:18:31.944091  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	I1101 10:18:29.921890  740314 addons.go:515] duration metric: took 474.941164ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:18:30.224852  740314 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-680879" context rescaled to 1 replicas
	W1101 10:18:31.725627  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	W1101 10:18:34.225341  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
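The node_ready.go wait loop above keeps retrying until the node reports a Ready condition. Done from outside the node against an exported kubeconfig rather than over SSH, the equivalent check with client-go could look like the sketch below; the kubeconfig path and node name are assumptions taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust to the profile being tested.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-680879", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}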
	I1101 10:18:30.514666  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:30.514725  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:31.687728  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:58220->192.168.103.2:8443: read: connection reset by peer
	I1101 10:18:31.687782  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:31.688197  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:32.011513  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:32.011967  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:32.511614  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:32.512160  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:33.011829  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:33.012321  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:33.512045  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:33.512481  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:34.012263  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:34.012761  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:34.512476  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:34.513005  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	W1101 10:18:34.443797  738963 node_ready.go:57] node "old-k8s-version-556573" has "Ready":"False" status (will retry)
	I1101 10:18:36.443695  738963 node_ready.go:49] node "old-k8s-version-556573" is "Ready"
	I1101 10:18:36.443732  738963 node_ready.go:38] duration metric: took 13.503361146s for node "old-k8s-version-556573" to be "Ready" ...
	I1101 10:18:36.443750  738963 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:18:36.443815  738963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:18:36.456387  738963 api_server.go:72] duration metric: took 13.876100443s to wait for apiserver process to appear ...
	I1101 10:18:36.456422  738963 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:18:36.456456  738963 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:18:36.460765  738963 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 10:18:36.461998  738963 api_server.go:141] control plane version: v1.28.0
	I1101 10:18:36.462033  738963 api_server.go:131] duration metric: took 5.60277ms to wait for apiserver health ...
	I1101 10:18:36.462042  738963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:18:36.465787  738963 system_pods.go:59] 8 kube-system pods found
	I1101 10:18:36.465866  738963 system_pods.go:61] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:36.465882  738963 system_pods.go:61] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:36.465893  738963 system_pods.go:61] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:36.465899  738963 system_pods.go:61] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:36.465909  738963 system_pods.go:61] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:36.465914  738963 system_pods.go:61] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:36.465920  738963 system_pods.go:61] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:36.465930  738963 system_pods.go:61] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:36.465946  738963 system_pods.go:74] duration metric: took 3.896458ms to wait for pod list to return data ...
	I1101 10:18:36.465961  738963 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:18:36.467989  738963 default_sa.go:45] found service account: "default"
	I1101 10:18:36.468011  738963 default_sa.go:55] duration metric: took 2.042477ms for default service account to be created ...
	I1101 10:18:36.468020  738963 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:18:36.471293  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:36.471329  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:36.471338  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:36.471351  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:36.471357  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:36.471363  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:36.471368  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:36.471381  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:36.471393  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:36.471434  738963 retry.go:31] will retry after 192.603663ms: missing components: kube-dns
	I1101 10:18:36.668587  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:36.668642  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:36.668651  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:36.668659  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:36.668665  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:36.668671  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:36.668676  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:36.668686  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:36.668697  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:36.668719  738963 retry.go:31] will retry after 277.22195ms: missing components: kube-dns
	I1101 10:18:36.950586  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:36.950645  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:36.950660  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:36.950669  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:36.950675  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:36.950686  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:36.950691  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:36.950695  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:36.950707  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:36.950727  738963 retry.go:31] will retry after 403.084038ms: missing components: kube-dns
	I1101 10:18:37.357668  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:37.357707  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:37.357714  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:37.357719  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:37.357723  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:37.357728  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:37.357732  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:37.357735  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:37.357739  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:37.357760  738963 retry.go:31] will retry after 462.647878ms: missing components: kube-dns
	I1101 10:18:37.825041  738963 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:37.825078  738963 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Running
	I1101 10:18:37.825086  738963 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running
	I1101 10:18:37.825091  738963 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running
	I1101 10:18:37.825098  738963 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running
	I1101 10:18:37.825104  738963 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running
	I1101 10:18:37.825109  738963 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running
	I1101 10:18:37.825115  738963 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running
	I1101 10:18:37.825121  738963 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Running
	I1101 10:18:37.825133  738963 system_pods.go:126] duration metric: took 1.357105468s to wait for k8s-apps to be running ...
	I1101 10:18:37.825147  738963 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:18:37.825208  738963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:18:37.839137  738963 system_svc.go:56] duration metric: took 13.973146ms WaitForService to wait for kubelet
	I1101 10:18:37.839172  738963 kubeadm.go:587] duration metric: took 15.25889387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:18:37.839201  738963 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:18:37.841954  738963 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:18:37.841985  738963 node_conditions.go:123] node cpu capacity is 8
	I1101 10:18:37.841998  738963 node_conditions.go:105] duration metric: took 2.792159ms to run NodePressure ...
	I1101 10:18:37.842012  738963 start.go:242] waiting for startup goroutines ...
	I1101 10:18:37.842021  738963 start.go:247] waiting for cluster config update ...
	I1101 10:18:37.842035  738963 start.go:256] writing updated cluster config ...
	I1101 10:18:37.842333  738963 ssh_runner.go:195] Run: rm -f paused
	I1101 10:18:37.846351  738963 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:18:37.850801  738963 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-cprx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.855924  738963 pod_ready.go:94] pod "coredns-5dd5756b68-cprx9" is "Ready"
	I1101 10:18:37.855951  738963 pod_ready.go:86] duration metric: took 5.12496ms for pod "coredns-5dd5756b68-cprx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.858621  738963 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.862695  738963 pod_ready.go:94] pod "etcd-old-k8s-version-556573" is "Ready"
	I1101 10:18:37.862716  738963 pod_ready.go:86] duration metric: took 4.071246ms for pod "etcd-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.865127  738963 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.873200  738963 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-556573" is "Ready"
	I1101 10:18:37.873298  738963 pod_ready.go:86] duration metric: took 8.146998ms for pod "kube-apiserver-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:37.883663  738963 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:38.251129  738963 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-556573" is "Ready"
	I1101 10:18:38.251161  738963 pod_ready.go:86] duration metric: took 367.462146ms for pod "kube-controller-manager-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:38.450774  738963 pod_ready.go:83] waiting for pod "kube-proxy-s9fsm" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:18:36.225430  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	W1101 10:18:38.225569  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	I1101 10:18:38.850535  738963 pod_ready.go:94] pod "kube-proxy-s9fsm" is "Ready"
	I1101 10:18:38.850562  738963 pod_ready.go:86] duration metric: took 399.759873ms for pod "kube-proxy-s9fsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:39.051414  738963 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:39.450679  738963 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-556573" is "Ready"
	I1101 10:18:39.450709  738963 pod_ready.go:86] duration metric: took 399.266371ms for pod "kube-scheduler-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:39.450721  738963 pod_ready.go:40] duration metric: took 1.604325628s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:18:39.497046  738963 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:18:39.498415  738963 out.go:203] 
	W1101 10:18:39.499605  738963 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:18:39.500655  738963 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:18:39.502040  738963 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-556573" cluster and "default" namespace by default
	I1101 10:18:35.011496  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:35.012087  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:35.511714  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:35.512227  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:36.011482  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:36.011919  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:36.512471  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:36.512951  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:37.011590  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:37.012079  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:37.511714  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:37.512153  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:38.011797  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:38.012334  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:38.512036  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:38.512501  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:39.011766  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:39.012268  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:39.511921  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:39.512385  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	W1101 10:18:40.725182  740314 node_ready.go:57] node "no-preload-680879" has "Ready":"False" status (will retry)
	I1101 10:18:42.724400  740314 node_ready.go:49] node "no-preload-680879" is "Ready"
	I1101 10:18:42.724437  740314 node_ready.go:38] duration metric: took 13.002662095s for node "no-preload-680879" to be "Ready" ...
	I1101 10:18:42.724457  740314 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:18:42.724527  740314 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:18:42.738162  740314 api_server.go:72] duration metric: took 13.291249668s to wait for apiserver process to appear ...
	I1101 10:18:42.738194  740314 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:18:42.738218  740314 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:18:42.742912  740314 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:18:42.744056  740314 api_server.go:141] control plane version: v1.34.1
	I1101 10:18:42.744088  740314 api_server.go:131] duration metric: took 5.886134ms to wait for apiserver health ...
	I1101 10:18:42.744099  740314 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:18:42.748220  740314 system_pods.go:59] 8 kube-system pods found
	I1101 10:18:42.748258  740314 system_pods.go:61] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:42.748267  740314 system_pods.go:61] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:42.748275  740314 system_pods.go:61] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:42.748281  740314 system_pods.go:61] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:42.748287  740314 system_pods.go:61] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:42.748294  740314 system_pods.go:61] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:42.748300  740314 system_pods.go:61] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:42.748307  740314 system_pods.go:61] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:42.748317  740314 system_pods.go:74] duration metric: took 4.210344ms to wait for pod list to return data ...
	I1101 10:18:42.748327  740314 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:18:42.751008  740314 default_sa.go:45] found service account: "default"
	I1101 10:18:42.751029  740314 default_sa.go:55] duration metric: took 2.695361ms for default service account to be created ...
	I1101 10:18:42.751046  740314 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:18:42.753639  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:42.753665  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:42.753671  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:42.753677  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:42.753689  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:42.753694  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:42.753698  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:42.753703  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:42.753710  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:42.753741  740314 retry.go:31] will retry after 211.09158ms: missing components: kube-dns
	I1101 10:18:42.968806  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:42.968858  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:42.968866  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:42.968873  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:42.968877  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:42.968883  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:42.968886  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:42.968890  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:42.968894  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:42.968914  740314 retry.go:31] will retry after 274.560478ms: missing components: kube-dns
	I1101 10:18:43.248096  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:43.248134  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:43.248140  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:43.248145  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:43.248149  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:43.248152  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:43.248157  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:43.248160  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:43.248165  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:43.248181  740314 retry.go:31] will retry after 293.247064ms: missing components: kube-dns
	I1101 10:18:43.545044  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:43.545077  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:18:43.545082  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:43.545088  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:43.545092  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:43.545097  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:43.545100  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:43.545104  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:43.545108  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:18:43.545126  740314 retry.go:31] will retry after 576.006416ms: missing components: kube-dns
	I1101 10:18:44.125748  740314 system_pods.go:86] 8 kube-system pods found
	I1101 10:18:44.125781  740314 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Running
	I1101 10:18:44.125787  740314 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running
	I1101 10:18:44.125790  740314 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running
	I1101 10:18:44.125794  740314 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running
	I1101 10:18:44.125798  740314 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running
	I1101 10:18:44.125801  740314 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running
	I1101 10:18:44.125804  740314 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running
	I1101 10:18:44.125807  740314 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Running
	I1101 10:18:44.125814  740314 system_pods.go:126] duration metric: took 1.374763735s to wait for k8s-apps to be running ...
	I1101 10:18:44.125822  740314 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:18:44.125905  740314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:18:44.140637  740314 system_svc.go:56] duration metric: took 14.798364ms WaitForService to wait for kubelet
	I1101 10:18:44.140680  740314 kubeadm.go:587] duration metric: took 14.693774339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:18:44.140705  740314 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:18:44.144140  740314 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:18:44.144168  740314 node_conditions.go:123] node cpu capacity is 8
	I1101 10:18:44.144185  740314 node_conditions.go:105] duration metric: took 3.47573ms to run NodePressure ...
	I1101 10:18:44.144199  740314 start.go:242] waiting for startup goroutines ...
	I1101 10:18:44.144207  740314 start.go:247] waiting for cluster config update ...
	I1101 10:18:44.144218  740314 start.go:256] writing updated cluster config ...
	I1101 10:18:44.144512  740314 ssh_runner.go:195] Run: rm -f paused
	I1101 10:18:44.149407  740314 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:18:44.153616  740314 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rh4z7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.158401  740314 pod_ready.go:94] pod "coredns-66bc5c9577-rh4z7" is "Ready"
	I1101 10:18:44.158432  740314 pod_ready.go:86] duration metric: took 4.788284ms for pod "coredns-66bc5c9577-rh4z7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.160661  740314 pod_ready.go:83] waiting for pod "etcd-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.164804  740314 pod_ready.go:94] pod "etcd-no-preload-680879" is "Ready"
	I1101 10:18:44.164832  740314 pod_ready.go:86] duration metric: took 4.144476ms for pod "etcd-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.167110  740314 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.171310  740314 pod_ready.go:94] pod "kube-apiserver-no-preload-680879" is "Ready"
	I1101 10:18:44.171343  740314 pod_ready.go:86] duration metric: took 4.207095ms for pod "kube-apiserver-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.173299  740314 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:40.012410  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:40.012862  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:40.511494  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:40.511911  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:41.012482  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:41.013003  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:41.511510  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:41.512034  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:42.011550  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:42.012069  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:18:42.511712  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:44.553594  740314 pod_ready.go:94] pod "kube-controller-manager-no-preload-680879" is "Ready"
	I1101 10:18:44.553624  740314 pod_ready.go:86] duration metric: took 380.299059ms for pod "kube-controller-manager-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:44.754182  740314 pod_ready.go:83] waiting for pod "kube-proxy-ft2vw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:45.154860  740314 pod_ready.go:94] pod "kube-proxy-ft2vw" is "Ready"
	I1101 10:18:45.154888  740314 pod_ready.go:86] duration metric: took 400.675768ms for pod "kube-proxy-ft2vw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:45.354323  740314 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:45.753827  740314 pod_ready.go:94] pod "kube-scheduler-no-preload-680879" is "Ready"
	I1101 10:18:45.753888  740314 pod_ready.go:86] duration metric: took 399.537997ms for pod "kube-scheduler-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:18:45.753906  740314 pod_ready.go:40] duration metric: took 1.60445952s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:18:45.800059  740314 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:18:45.801658  740314 out.go:179] * Done! kubectl is now configured to use "no-preload-680879" cluster and "default" namespace by default
	I1101 10:18:47.512486  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:47.512541  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:18:52.513185  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:18:52.513263  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:18:52.513339  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:18:52.546807  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:18:52.546862  734517 cri.go:89] found id: "84f2fae7227a7b93b7bc551363bf8ec3dea359a6fdb773c1fb8b71715e04ad92"
	I1101 10:18:52.546871  734517 cri.go:89] found id: ""
	I1101 10:18:52.546882  734517 logs.go:282] 2 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7 84f2fae7227a7b93b7bc551363bf8ec3dea359a6fdb773c1fb8b71715e04ad92]
	I1101 10:18:52.546959  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:18:52.551706  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:18:52.555760  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:18:52.555872  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:18:52.588829  734517 cri.go:89] found id: ""
	I1101 10:18:52.588889  734517 logs.go:282] 0 containers: []
	W1101 10:18:52.588901  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:18:52.588929  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:18:52.589006  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:18:52.620644  734517 cri.go:89] found id: ""
	I1101 10:18:52.620673  734517 logs.go:282] 0 containers: []
	W1101 10:18:52.620682  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:18:52.620689  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:18:52.620747  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:18:52.654081  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:18:52.654112  734517 cri.go:89] found id: ""
	I1101 10:18:52.654122  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:18:52.654189  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:18:52.658531  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:18:52.658593  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:18:52.688518  734517 cri.go:89] found id: ""
	I1101 10:18:52.688546  734517 logs.go:282] 0 containers: []
	W1101 10:18:52.688557  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:18:52.688566  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:18:52.688625  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:18:52.718348  734517 cri.go:89] found id: "5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:18:52.718377  734517 cri.go:89] found id: ""
	I1101 10:18:52.718389  734517 logs.go:282] 1 containers: [5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed]
	I1101 10:18:52.718455  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:18:52.723264  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:18:52.723340  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:18:52.751703  734517 cri.go:89] found id: ""
	I1101 10:18:52.751736  734517 logs.go:282] 0 containers: []
	W1101 10:18:52.751748  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:18:52.751759  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:18:52.751813  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:18:52.781043  734517 cri.go:89] found id: ""
	I1101 10:18:52.781077  734517 logs.go:282] 0 containers: []
	W1101 10:18:52.781086  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:18:52.781107  734517 logs.go:123] Gathering logs for kube-apiserver [84f2fae7227a7b93b7bc551363bf8ec3dea359a6fdb773c1fb8b71715e04ad92] ...
	I1101 10:18:52.781128  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 84f2fae7227a7b93b7bc551363bf8ec3dea359a6fdb773c1fb8b71715e04ad92"
	I1101 10:18:52.813808  734517 logs.go:123] Gathering logs for kube-controller-manager [5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed] ...
	I1101 10:18:52.813858  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:18:52.842689  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:18:52.842717  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:18:52.882301  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:18:52.882344  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:18:52.945466  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:18:52.945511  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:18:52.965231  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:18:52.965269  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> CRI-O <==
	Nov 01 10:18:43 no-preload-680879 crio[769]: time="2025-11-01T10:18:43.036152299Z" level=info msg="Starting container: d18d4aa2de00517f84400c1d0aa587e16f033a911ee2592e949a36a0a2f412ea" id=586d3538-3a2c-4f08-9283-16a3643d34ed name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:18:43 no-preload-680879 crio[769]: time="2025-11-01T10:18:43.038028039Z" level=info msg="Started container" PID=2891 containerID=d18d4aa2de00517f84400c1d0aa587e16f033a911ee2592e949a36a0a2f412ea description=kube-system/coredns-66bc5c9577-rh4z7/coredns id=586d3538-3a2c-4f08-9283-16a3643d34ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=70975bb50b7ab8683f05d81977750385f76bb1ba9f8d92a92fbe9f6bbd084d23
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.268309178Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c7c96ece-ece3-4cd9-a8ec-8f5d8480407c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.268385741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.273149645Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:eb6596e431eb62dc98f47766fafb8d6bb40b4f785a3101d0391da95a8bcb4556 UID:f829de8a-1e4a-4549-8dea-1e345dc87d58 NetNS:/var/run/netns/9aef059b-0c22-4c0a-a589-c3c4c391b1ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008851e0}] Aliases:map[]}"
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.273190575Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.283293868Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:eb6596e431eb62dc98f47766fafb8d6bb40b4f785a3101d0391da95a8bcb4556 UID:f829de8a-1e4a-4549-8dea-1e345dc87d58 NetNS:/var/run/netns/9aef059b-0c22-4c0a-a589-c3c4c391b1ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008851e0}] Aliases:map[]}"
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.283429505Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.284328468Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.285523117Z" level=info msg="Ran pod sandbox eb6596e431eb62dc98f47766fafb8d6bb40b4f785a3101d0391da95a8bcb4556 with infra container: default/busybox/POD" id=c7c96ece-ece3-4cd9-a8ec-8f5d8480407c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.28677914Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c148023f-ec56-4aa0-afba-3e89d419d701 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.286928327Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c148023f-ec56-4aa0-afba-3e89d419d701 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.286963357Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c148023f-ec56-4aa0-afba-3e89d419d701 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.287446971Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=17789167-149a-46cd-b98b-a9ed66064189 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:18:46 no-preload-680879 crio[769]: time="2025-11-01T10:18:46.28904049Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.549238921Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=17789167-149a-46cd-b98b-a9ed66064189 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.549996903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=987eeec2-7e61-45da-b4fb-a8f5cf1a7001 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.551354336Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=42fc27a9-7d3d-4f0e-a877-49cbdec2de25 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.554502207Z" level=info msg="Creating container: default/busybox/busybox" id=908cee34-ebf7-4577-bc33-a9a667c13880 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.554645574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.55842599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.558858807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.585562077Z" level=info msg="Created container afb5912634f982030dd8d9be480af4ecbadfcc7ae6a08bafac123d280e4ed06a: default/busybox/busybox" id=908cee34-ebf7-4577-bc33-a9a667c13880 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.586242431Z" level=info msg="Starting container: afb5912634f982030dd8d9be480af4ecbadfcc7ae6a08bafac123d280e4ed06a" id=e2b3059f-cd82-4846-ba60-0c2476df7cfd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:18:48 no-preload-680879 crio[769]: time="2025-11-01T10:18:48.588141179Z" level=info msg="Started container" PID=2967 containerID=afb5912634f982030dd8d9be480af4ecbadfcc7ae6a08bafac123d280e4ed06a description=default/busybox/busybox id=e2b3059f-cd82-4846-ba60-0c2476df7cfd name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb6596e431eb62dc98f47766fafb8d6bb40b4f785a3101d0391da95a8bcb4556
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	afb5912634f98       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   eb6596e431eb6       busybox                                     default
	d18d4aa2de005       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   70975bb50b7ab       coredns-66bc5c9577-rh4z7                    kube-system
	95a834634b636       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   8f544524d0553       storage-provisioner                         kube-system
	488d07d2e01fb       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   300cb718f36cb       kindnet-sjzlx                               kube-system
	0d0022be2b5b2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      26 seconds ago      Running             kube-proxy                0                   3f7e708d7c5e6       kube-proxy-ft2vw                            kube-system
	78021612e4909       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   5da40bb3d833b       kube-scheduler-no-preload-680879            kube-system
	42eeaa350313a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   63a386078fce6       kube-apiserver-no-preload-680879            kube-system
	fa9432c1d0200       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   5c76766f70052       etcd-no-preload-680879                      kube-system
	27481c9456f54       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   56b8eb00c9eba       kube-controller-manager-no-preload-680879   kube-system
	
	
	==> coredns [d18d4aa2de00517f84400c1d0aa587e16f033a911ee2592e949a36a0a2f412ea] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41570 - 58866 "HINFO IN 2483699256849018115.1807005212510901970. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041728205s
	
	
	==> describe nodes <==
	Name:               no-preload-680879
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-680879
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=no-preload-680879
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_18_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:18:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-680879
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:18:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:18:54 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:18:54 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:18:54 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:18:54 +0000   Sat, 01 Nov 2025 10:18:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-680879
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                60389b87-92db-45cc-8d8b-f8362e2caec7
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-rh4z7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-680879                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-sjzlx                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-680879             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-680879    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-ft2vw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-680879             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node no-preload-680879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node no-preload-680879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node no-preload-680879 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node no-preload-680879 event: Registered Node no-preload-680879 in Controller
	  Normal  NodeReady                14s   kubelet          Node no-preload-680879 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [fa9432c1d020003ff499fac4523d5d7bf1632dd7b3f604e4e8faa2dbd397b73a] <==
	{"level":"warn","ts":"2025-11-01T10:18:20.589251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.596372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.604460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.611608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.620147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.626752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.645164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.649214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.656486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:20.715694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:18:21.363702Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.345423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/no-preload-680879\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-01T10:18:21.363765Z","caller":"traceutil/trace.go:172","msg":"trace[1892196775] range","detail":"{range_begin:/registry/csinodes/no-preload-680879; range_end:; response_count:0; response_revision:3; }","duration":"142.42535ms","start":"2025-11-01T10:18:21.221326Z","end":"2025-11-01T10:18:21.363751Z","steps":["trace[1892196775] 'agreement among raft nodes before linearized reading'  (duration: 96.756285ms)","trace[1892196775] 'range keys from in-memory index tree'  (duration: 45.559722ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:18:21.363814Z","caller":"traceutil/trace.go:172","msg":"trace[1762762687] transaction","detail":"{read_only:false; response_revision:4; number_of_response:1; }","duration":"147.615552ms","start":"2025-11-01T10:18:21.216177Z","end":"2025-11-01T10:18:21.363793Z","steps":["trace[1762762687] 'process raft request'  (duration: 101.955687ms)","trace[1762762687] 'compare'  (duration: 45.506387ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:18:21.363944Z","caller":"traceutil/trace.go:172","msg":"trace[449499401] transaction","detail":"{read_only:false; response_revision:5; number_of_response:1; }","duration":"147.696993ms","start":"2025-11-01T10:18:21.216227Z","end":"2025-11-01T10:18:21.363924Z","steps":["trace[449499401] 'process raft request'  (duration: 147.506786ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.363891Z","caller":"traceutil/trace.go:172","msg":"trace[1783133343] transaction","detail":"{read_only:false; response_revision:6; number_of_response:1; }","duration":"147.641565ms","start":"2025-11-01T10:18:21.216234Z","end":"2025-11-01T10:18:21.363876Z","steps":["trace[1783133343] 'process raft request'  (duration: 147.520539ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.363983Z","caller":"traceutil/trace.go:172","msg":"trace[212535899] transaction","detail":"{read_only:false; response_revision:8; number_of_response:1; }","duration":"147.64645ms","start":"2025-11-01T10:18:21.216315Z","end":"2025-11-01T10:18:21.363961Z","steps":["trace[212535899] 'process raft request'  (duration: 147.47653ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.364035Z","caller":"traceutil/trace.go:172","msg":"trace[2146300824] transaction","detail":"{read_only:false; response_revision:13; number_of_response:1; }","duration":"106.853774ms","start":"2025-11-01T10:18:21.257171Z","end":"2025-11-01T10:18:21.364025Z","steps":["trace[2146300824] 'process raft request'  (duration: 106.825095ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.364078Z","caller":"traceutil/trace.go:172","msg":"trace[1622855034] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"134.850215ms","start":"2025-11-01T10:18:21.229219Z","end":"2025-11-01T10:18:21.364069Z","steps":["trace[1622855034] 'process raft request'  (duration: 134.686676ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.364098Z","caller":"traceutil/trace.go:172","msg":"trace[1160884196] transaction","detail":"{read_only:false; response_revision:7; number_of_response:1; }","duration":"147.834957ms","start":"2025-11-01T10:18:21.216255Z","end":"2025-11-01T10:18:21.364090Z","steps":["trace[1160884196] 'process raft request'  (duration: 147.515709ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.364108Z","caller":"traceutil/trace.go:172","msg":"trace[1687144862] transaction","detail":"{read_only:false; response_revision:10; number_of_response:1; }","duration":"147.611677ms","start":"2025-11-01T10:18:21.216474Z","end":"2025-11-01T10:18:21.364085Z","steps":["trace[1687144862] 'process raft request'  (duration: 147.386392ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.364134Z","caller":"traceutil/trace.go:172","msg":"trace[844481229] transaction","detail":"{read_only:false; number_of_response:0; response_revision:10; }","duration":"146.101704ms","start":"2025-11-01T10:18:21.218014Z","end":"2025-11-01T10:18:21.364116Z","steps":["trace[844481229] 'process raft request'  (duration: 145.868244ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.364246Z","caller":"traceutil/trace.go:172","msg":"trace[1085640278] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"132.900208ms","start":"2025-11-01T10:18:21.231336Z","end":"2025-11-01T10:18:21.364236Z","steps":["trace[1085640278] 'process raft request'  (duration: 132.625328ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:18:21.364334Z","caller":"traceutil/trace.go:172","msg":"trace[176311473] transaction","detail":"{read_only:false; response_revision:9; number_of_response:1; }","duration":"147.875199ms","start":"2025-11-01T10:18:21.216449Z","end":"2025-11-01T10:18:21.364325Z","steps":["trace[176311473] 'process raft request'  (duration: 147.362823ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:18:21.364631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.211804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-01T10:18:21.364742Z","caller":"traceutil/trace.go:172","msg":"trace[476432557] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:13; }","duration":"109.325516ms","start":"2025-11-01T10:18:21.255402Z","end":"2025-11-01T10:18:21.364727Z","steps":["trace[476432557] 'agreement among raft nodes before linearized reading'  (duration: 108.964017ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:18:56 up  3:01,  0 user,  load average: 4.20, 3.66, 2.77
	Linux no-preload-680879 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [488d07d2e01fb4e6202259703785fb52f90bbbb8d45ea890bdff174dacd35ce7] <==
	I1101 10:18:32.253082       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:18:32.253334       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:18:32.253487       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:18:32.253505       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:18:32.253528       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:18:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:18:32.456931       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:18:32.456981       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:18:32.456999       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:18:32.457169       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:18:32.848353       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:18:32.848394       1 metrics.go:72] Registering metrics
	I1101 10:18:32.848474       1 controller.go:711] "Syncing nftables rules"
	I1101 10:18:42.457689       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:18:42.457765       1 main.go:301] handling current node
	I1101 10:18:52.458935       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:18:52.458975       1 main.go:301] handling current node
	
	
	==> kube-apiserver [42eeaa350313aafdc4a0bae0e6fdff6ef643ec556443486a4ba346d5fb0c5372] <==
	I1101 10:18:21.214740       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:18:21.215395       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:18:21.365745       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:18:21.366011       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:18:21.368027       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:18:21.385104       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:18:21.385390       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:18:22.117881       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:18:22.122343       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:18:22.122356       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:18:22.577167       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:18:22.646794       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:18:22.722829       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:18:22.729591       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:18:22.731050       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:18:22.735922       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:18:23.140871       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:18:23.885118       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:18:23.894214       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:18:23.901230       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:18:28.596149       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:18:28.600025       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:18:29.192691       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:18:29.242368       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 10:18:55.054681       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:33454: use of closed network connection
	
	
	==> kube-controller-manager [27481c9456f54ce08989e20f76b07544d3102b02309317b40c252fff6a418dbf] <==
	I1101 10:18:28.140806       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:18:28.140884       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:18:28.140910       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:18:28.140929       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:18:28.140931       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:18:28.140911       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:18:28.140978       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:18:28.141068       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:18:28.141091       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:18:28.141126       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:18:28.141181       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:18:28.141873       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:18:28.141883       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:18:28.142337       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:18:28.143596       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:18:28.144811       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:18:28.145945       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:18:28.145978       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:18:28.146003       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:18:28.146010       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:18:28.146014       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:18:28.153092       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-680879" podCIDRs=["10.244.0.0/24"]
	I1101 10:18:28.153931       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:18:28.163404       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:18:43.123461       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0d0022be2b5b2581bbe7ffdec4d054265b0dcf613f2158b1d6d72184b1737b94] <==
	I1101 10:18:29.686902       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:18:29.752412       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:18:29.853055       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:18:29.853102       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:18:29.853238       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:18:29.872336       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:18:29.872380       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:18:29.878958       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:18:29.879343       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:18:29.879388       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:18:29.880779       1 config.go:200] "Starting service config controller"
	I1101 10:18:29.880872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:18:29.880887       1 config.go:309] "Starting node config controller"
	I1101 10:18:29.880902       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:18:29.880783       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:18:29.880911       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:18:29.880815       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:18:29.880923       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:18:29.880910       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:18:29.981404       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:18:29.981430       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:18:29.981443       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [78021612e49096e10681e56c9dae1149453eb18befafa0ea6a8cef67ab8cc5e2] <==
	E1101 10:18:21.174915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:18:21.174954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:18:21.175017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:18:21.175017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:18:21.175112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:18:21.175152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:18:21.175161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:18:21.175244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:18:21.175284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:18:21.175333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:18:21.175386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:18:21.175405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:18:21.175473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:18:21.175537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:18:21.175565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:18:21.175640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:18:21.978200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:18:21.994571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:18:22.114403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:18:22.190371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:18:22.242654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:18:22.305885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:18:22.411618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:18:22.455423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:18:25.572381       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:18:24 no-preload-680879 kubelet[2283]: E1101 10:18:24.764541    2283 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-680879\" already exists" pod="kube-system/kube-apiserver-no-preload-680879"
	Nov 01 10:18:24 no-preload-680879 kubelet[2283]: I1101 10:18:24.791463    2283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-680879" podStartSLOduration=1.7914387440000001 podStartE2EDuration="1.791438744s" podCreationTimestamp="2025-11-01 10:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:24.779986873 +0000 UTC m=+1.132539256" watchObservedRunningTime="2025-11-01 10:18:24.791438744 +0000 UTC m=+1.143991135"
	Nov 01 10:18:24 no-preload-680879 kubelet[2283]: I1101 10:18:24.808352    2283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-680879" podStartSLOduration=1.808331108 podStartE2EDuration="1.808331108s" podCreationTimestamp="2025-11-01 10:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:24.793962876 +0000 UTC m=+1.146515286" watchObservedRunningTime="2025-11-01 10:18:24.808331108 +0000 UTC m=+1.160883500"
	Nov 01 10:18:24 no-preload-680879 kubelet[2283]: I1101 10:18:24.808490    2283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-680879" podStartSLOduration=1.808483532 podStartE2EDuration="1.808483532s" podCreationTimestamp="2025-11-01 10:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:24.80771681 +0000 UTC m=+1.160269200" watchObservedRunningTime="2025-11-01 10:18:24.808483532 +0000 UTC m=+1.161035942"
	Nov 01 10:18:24 no-preload-680879 kubelet[2283]: I1101 10:18:24.825286    2283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-680879" podStartSLOduration=1.825250886 podStartE2EDuration="1.825250886s" podCreationTimestamp="2025-11-01 10:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:24.824704945 +0000 UTC m=+1.177257355" watchObservedRunningTime="2025-11-01 10:18:24.825250886 +0000 UTC m=+1.177803278"
	Nov 01 10:18:28 no-preload-680879 kubelet[2283]: I1101 10:18:28.256533    2283 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:18:28 no-preload-680879 kubelet[2283]: I1101 10:18:28.257195    2283 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:18:29 no-preload-680879 kubelet[2283]: I1101 10:18:29.359775    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f097a1a9-0797-4a99-bbd5-4a8a8356f82d-xtables-lock\") pod \"kube-proxy-ft2vw\" (UID: \"f097a1a9-0797-4a99-bbd5-4a8a8356f82d\") " pod="kube-system/kube-proxy-ft2vw"
	Nov 01 10:18:29 no-preload-680879 kubelet[2283]: I1101 10:18:29.359888    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f097a1a9-0797-4a99-bbd5-4a8a8356f82d-lib-modules\") pod \"kube-proxy-ft2vw\" (UID: \"f097a1a9-0797-4a99-bbd5-4a8a8356f82d\") " pod="kube-system/kube-proxy-ft2vw"
	Nov 01 10:18:29 no-preload-680879 kubelet[2283]: I1101 10:18:29.359918    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2be6e8f4-e62c-4075-b883-b34e1b3c71f4-xtables-lock\") pod \"kindnet-sjzlx\" (UID: \"2be6e8f4-e62c-4075-b883-b34e1b3c71f4\") " pod="kube-system/kindnet-sjzlx"
	Nov 01 10:18:29 no-preload-680879 kubelet[2283]: I1101 10:18:29.359941    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pcbc\" (UniqueName: \"kubernetes.io/projected/2be6e8f4-e62c-4075-b883-b34e1b3c71f4-kube-api-access-2pcbc\") pod \"kindnet-sjzlx\" (UID: \"2be6e8f4-e62c-4075-b883-b34e1b3c71f4\") " pod="kube-system/kindnet-sjzlx"
	Nov 01 10:18:29 no-preload-680879 kubelet[2283]: I1101 10:18:29.359984    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f097a1a9-0797-4a99-bbd5-4a8a8356f82d-kube-proxy\") pod \"kube-proxy-ft2vw\" (UID: \"f097a1a9-0797-4a99-bbd5-4a8a8356f82d\") " pod="kube-system/kube-proxy-ft2vw"
	Nov 01 10:18:29 no-preload-680879 kubelet[2283]: I1101 10:18:29.360009    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mxt\" (UniqueName: \"kubernetes.io/projected/f097a1a9-0797-4a99-bbd5-4a8a8356f82d-kube-api-access-62mxt\") pod \"kube-proxy-ft2vw\" (UID: \"f097a1a9-0797-4a99-bbd5-4a8a8356f82d\") " pod="kube-system/kube-proxy-ft2vw"
	Nov 01 10:18:29 no-preload-680879 kubelet[2283]: I1101 10:18:29.360039    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2be6e8f4-e62c-4075-b883-b34e1b3c71f4-cni-cfg\") pod \"kindnet-sjzlx\" (UID: \"2be6e8f4-e62c-4075-b883-b34e1b3c71f4\") " pod="kube-system/kindnet-sjzlx"
	Nov 01 10:18:29 no-preload-680879 kubelet[2283]: I1101 10:18:29.360061    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2be6e8f4-e62c-4075-b883-b34e1b3c71f4-lib-modules\") pod \"kindnet-sjzlx\" (UID: \"2be6e8f4-e62c-4075-b883-b34e1b3c71f4\") " pod="kube-system/kindnet-sjzlx"
	Nov 01 10:18:30 no-preload-680879 kubelet[2283]: I1101 10:18:30.094632    2283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ft2vw" podStartSLOduration=1.09461259 podStartE2EDuration="1.09461259s" podCreationTimestamp="2025-11-01 10:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:29.779340116 +0000 UTC m=+6.131892529" watchObservedRunningTime="2025-11-01 10:18:30.09461259 +0000 UTC m=+6.447164982"
	Nov 01 10:18:32 no-preload-680879 kubelet[2283]: I1101 10:18:32.784800    2283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sjzlx" podStartSLOduration=1.3302404829999999 podStartE2EDuration="3.784779502s" podCreationTimestamp="2025-11-01 10:18:29 +0000 UTC" firstStartedPulling="2025-11-01 10:18:29.581178134 +0000 UTC m=+5.933730512" lastFinishedPulling="2025-11-01 10:18:32.035717157 +0000 UTC m=+8.388269531" observedRunningTime="2025-11-01 10:18:32.784609568 +0000 UTC m=+9.137161974" watchObservedRunningTime="2025-11-01 10:18:32.784779502 +0000 UTC m=+9.137331893"
	Nov 01 10:18:42 no-preload-680879 kubelet[2283]: I1101 10:18:42.655825    2283 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:18:42 no-preload-680879 kubelet[2283]: I1101 10:18:42.755758    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d-tmp\") pod \"storage-provisioner\" (UID: \"ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d\") " pod="kube-system/storage-provisioner"
	Nov 01 10:18:42 no-preload-680879 kubelet[2283]: I1101 10:18:42.755802    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76d75e15-e9dd-4d86-97f2-d24aa8d1e4af-config-volume\") pod \"coredns-66bc5c9577-rh4z7\" (UID: \"76d75e15-e9dd-4d86-97f2-d24aa8d1e4af\") " pod="kube-system/coredns-66bc5c9577-rh4z7"
	Nov 01 10:18:42 no-preload-680879 kubelet[2283]: I1101 10:18:42.755879    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn9m2\" (UniqueName: \"kubernetes.io/projected/76d75e15-e9dd-4d86-97f2-d24aa8d1e4af-kube-api-access-fn9m2\") pod \"coredns-66bc5c9577-rh4z7\" (UID: \"76d75e15-e9dd-4d86-97f2-d24aa8d1e4af\") " pod="kube-system/coredns-66bc5c9577-rh4z7"
	Nov 01 10:18:42 no-preload-680879 kubelet[2283]: I1101 10:18:42.755932    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqh2\" (UniqueName: \"kubernetes.io/projected/ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d-kube-api-access-2wqh2\") pod \"storage-provisioner\" (UID: \"ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d\") " pod="kube-system/storage-provisioner"
	Nov 01 10:18:43 no-preload-680879 kubelet[2283]: I1101 10:18:43.811885    2283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rh4z7" podStartSLOduration=14.811865915 podStartE2EDuration="14.811865915s" podCreationTimestamp="2025-11-01 10:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:43.811709787 +0000 UTC m=+20.164262196" watchObservedRunningTime="2025-11-01 10:18:43.811865915 +0000 UTC m=+20.164418307"
	Nov 01 10:18:43 no-preload-680879 kubelet[2283]: I1101 10:18:43.831410    2283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.831385595 podStartE2EDuration="14.831385595s" podCreationTimestamp="2025-11-01 10:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:18:43.820947507 +0000 UTC m=+20.173499922" watchObservedRunningTime="2025-11-01 10:18:43.831385595 +0000 UTC m=+20.183937988"
	Nov 01 10:18:46 no-preload-680879 kubelet[2283]: I1101 10:18:46.076251    2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8wj\" (UniqueName: \"kubernetes.io/projected/f829de8a-1e4a-4549-8dea-1e345dc87d58-kube-api-access-xk8wj\") pod \"busybox\" (UID: \"f829de8a-1e4a-4549-8dea-1e345dc87d58\") " pod="default/busybox"
	
	
	==> storage-provisioner [95a834634b63688ca92d1ffd1d68cfe92e00dcad275f979b282c52c9622d54bc] <==
	I1101 10:18:43.038564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:18:43.047390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:18:43.047480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:18:43.049813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:43.054874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:18:43.055031       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:18:43.055220       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-680879_88108fa8-b3d6-4218-8d30-79631f1fb407!
	I1101 10:18:43.055189       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6660dd7f-bed9-45cf-892b-1e6435b24faf", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-680879_88108fa8-b3d6-4218-8d30-79631f1fb407 became leader
	W1101 10:18:43.057214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:43.060761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:18:43.156281       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-680879_88108fa8-b3d6-4218-8d30-79631f1fb407!
	W1101 10:18:45.064878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:45.068794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:47.071645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:47.077081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:49.080575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:49.084379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:51.087491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:51.092792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:53.095819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:53.099738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:55.102704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:55.106935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680879 -n no-preload-680879
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-680879 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.28s)
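For reference, the node summary and pod listing captured in the post-mortem above are the standard kubectl views that the test helpers collect; assuming the kubeconfig context created for this profile still exists, roughly the same information can be regenerated by hand (a sketch, not part of the test harness):

	kubectl --context no-preload-680879 describe node no-preload-680879
	kubectl --context no-preload-680879 get pods -A -o wide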

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-556573 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-556573 --alsologtostderr -v=1: exit status 80 (2.464447932s)
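In the stderr below, the pause step repeatedly fails to enumerate running containers: each `sudo runc list -f json` call on the node exits with status 1 and `open /run/runc: no such file or directory`, and minikube keeps retrying before the command ultimately exits with status 80 as shown above. A quick manual check of that state directory on the node (a hedged sketch; the path is taken verbatim from the error message) would be:

	out/minikube-linux-amd64 -p old-k8s-version-556573 ssh -- sudo ls -ld /run/runc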

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-556573 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:20:03.593205  757464 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:20:03.593508  757464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:03.593518  757464 out.go:374] Setting ErrFile to fd 2...
	I1101 10:20:03.593523  757464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:03.593720  757464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:20:03.594002  757464 out.go:368] Setting JSON to false
	I1101 10:20:03.594047  757464 mustload.go:66] Loading cluster: old-k8s-version-556573
	I1101 10:20:03.594409  757464 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:20:03.594820  757464 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:20:03.613398  757464 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:20:03.613777  757464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:20:03.678581  757464 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-01 10:20:03.667815563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:20:03.679238  757464 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-556573 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:20:03.681237  757464 out.go:179] * Pausing node old-k8s-version-556573 ... 
	I1101 10:20:03.682380  757464 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:20:03.682670  757464 ssh_runner.go:195] Run: systemctl --version
	I1101 10:20:03.682726  757464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:20:03.700900  757464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:20:03.803427  757464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:03.817589  757464 pause.go:52] kubelet running: true
	I1101 10:20:03.817665  757464 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:20:03.987346  757464 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:20:03.987455  757464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:20:04.065342  757464 cri.go:89] found id: "eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4"
	I1101 10:20:04.065377  757464 cri.go:89] found id: "17a38fc632529ff81911abfb211dcd7b07d60fd60c225ccae529e36e62d8b497"
	I1101 10:20:04.065384  757464 cri.go:89] found id: "afb66b64e1b12d5df0e760a5855c578f0d4a4b6656cb02a4aee48ff926e6c3ed"
	I1101 10:20:04.065389  757464 cri.go:89] found id: "8fd6240f85ba7e33bc3cd42db7e4ecfbef506ccc7d5709f3945a260b4406ba64"
	I1101 10:20:04.065394  757464 cri.go:89] found id: "39fe07ee60bf7ed7e063e6b8673b642d58d70c7d696018d876b8bdb6e0d86d70"
	I1101 10:20:04.065412  757464 cri.go:89] found id: "f7ba02ac9362802eef20c5f8870a35d429e636eb86c22620f260caf726977133"
	I1101 10:20:04.065422  757464 cri.go:89] found id: "898589e23f303c22d96fcb1dea82d386d8e8ed945f8c83a07c7f63c935471dbd"
	I1101 10:20:04.065424  757464 cri.go:89] found id: "def0c7222196bef86484e9e3c0a80fd1e6c0281c8d8ab1bbf3ec0fb56299940b"
	I1101 10:20:04.065427  757464 cri.go:89] found id: "34df676c07e5e1c97b53a43963c2ebbd436e0bd1bf7587e9f70aea3ccac71699"
	I1101 10:20:04.065433  757464 cri.go:89] found id: "1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc"
	I1101 10:20:04.065438  757464 cri.go:89] found id: "60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7"
	I1101 10:20:04.065441  757464 cri.go:89] found id: ""
	I1101 10:20:04.065485  757464 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:20:04.078824  757464 retry.go:31] will retry after 323.008753ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:04Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:20:04.402474  757464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:04.417447  757464 pause.go:52] kubelet running: false
	I1101 10:20:04.417524  757464 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:20:04.561272  757464 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:20:04.561382  757464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:20:04.642827  757464 cri.go:89] found id: "eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4"
	I1101 10:20:04.642920  757464 cri.go:89] found id: "17a38fc632529ff81911abfb211dcd7b07d60fd60c225ccae529e36e62d8b497"
	I1101 10:20:04.642925  757464 cri.go:89] found id: "afb66b64e1b12d5df0e760a5855c578f0d4a4b6656cb02a4aee48ff926e6c3ed"
	I1101 10:20:04.642929  757464 cri.go:89] found id: "8fd6240f85ba7e33bc3cd42db7e4ecfbef506ccc7d5709f3945a260b4406ba64"
	I1101 10:20:04.642932  757464 cri.go:89] found id: "39fe07ee60bf7ed7e063e6b8673b642d58d70c7d696018d876b8bdb6e0d86d70"
	I1101 10:20:04.642935  757464 cri.go:89] found id: "f7ba02ac9362802eef20c5f8870a35d429e636eb86c22620f260caf726977133"
	I1101 10:20:04.642937  757464 cri.go:89] found id: "898589e23f303c22d96fcb1dea82d386d8e8ed945f8c83a07c7f63c935471dbd"
	I1101 10:20:04.642939  757464 cri.go:89] found id: "def0c7222196bef86484e9e3c0a80fd1e6c0281c8d8ab1bbf3ec0fb56299940b"
	I1101 10:20:04.642942  757464 cri.go:89] found id: "34df676c07e5e1c97b53a43963c2ebbd436e0bd1bf7587e9f70aea3ccac71699"
	I1101 10:20:04.642955  757464 cri.go:89] found id: "1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc"
	I1101 10:20:04.642958  757464 cri.go:89] found id: "60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7"
	I1101 10:20:04.642960  757464 cri.go:89] found id: ""
	I1101 10:20:04.643000  757464 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:20:04.656713  757464 retry.go:31] will retry after 428.621713ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:04Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:20:05.086263  757464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:05.101250  757464 pause.go:52] kubelet running: false
	I1101 10:20:05.101327  757464 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:20:05.267198  757464 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:20:05.267293  757464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:20:05.340994  757464 cri.go:89] found id: "eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4"
	I1101 10:20:05.341022  757464 cri.go:89] found id: "17a38fc632529ff81911abfb211dcd7b07d60fd60c225ccae529e36e62d8b497"
	I1101 10:20:05.341027  757464 cri.go:89] found id: "afb66b64e1b12d5df0e760a5855c578f0d4a4b6656cb02a4aee48ff926e6c3ed"
	I1101 10:20:05.341032  757464 cri.go:89] found id: "8fd6240f85ba7e33bc3cd42db7e4ecfbef506ccc7d5709f3945a260b4406ba64"
	I1101 10:20:05.341036  757464 cri.go:89] found id: "39fe07ee60bf7ed7e063e6b8673b642d58d70c7d696018d876b8bdb6e0d86d70"
	I1101 10:20:05.341040  757464 cri.go:89] found id: "f7ba02ac9362802eef20c5f8870a35d429e636eb86c22620f260caf726977133"
	I1101 10:20:05.341043  757464 cri.go:89] found id: "898589e23f303c22d96fcb1dea82d386d8e8ed945f8c83a07c7f63c935471dbd"
	I1101 10:20:05.341047  757464 cri.go:89] found id: "def0c7222196bef86484e9e3c0a80fd1e6c0281c8d8ab1bbf3ec0fb56299940b"
	I1101 10:20:05.341050  757464 cri.go:89] found id: "34df676c07e5e1c97b53a43963c2ebbd436e0bd1bf7587e9f70aea3ccac71699"
	I1101 10:20:05.341059  757464 cri.go:89] found id: "1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc"
	I1101 10:20:05.341063  757464 cri.go:89] found id: "60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7"
	I1101 10:20:05.341067  757464 cri.go:89] found id: ""
	I1101 10:20:05.341117  757464 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:20:05.354741  757464 retry.go:31] will retry after 376.833207ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:05Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:20:05.732232  757464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:05.747231  757464 pause.go:52] kubelet running: false
	I1101 10:20:05.747297  757464 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:20:05.894097  757464 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:20:05.894187  757464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:20:05.969109  757464 cri.go:89] found id: "eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4"
	I1101 10:20:05.969141  757464 cri.go:89] found id: "17a38fc632529ff81911abfb211dcd7b07d60fd60c225ccae529e36e62d8b497"
	I1101 10:20:05.969147  757464 cri.go:89] found id: "afb66b64e1b12d5df0e760a5855c578f0d4a4b6656cb02a4aee48ff926e6c3ed"
	I1101 10:20:05.969152  757464 cri.go:89] found id: "8fd6240f85ba7e33bc3cd42db7e4ecfbef506ccc7d5709f3945a260b4406ba64"
	I1101 10:20:05.969156  757464 cri.go:89] found id: "39fe07ee60bf7ed7e063e6b8673b642d58d70c7d696018d876b8bdb6e0d86d70"
	I1101 10:20:05.969161  757464 cri.go:89] found id: "f7ba02ac9362802eef20c5f8870a35d429e636eb86c22620f260caf726977133"
	I1101 10:20:05.969165  757464 cri.go:89] found id: "898589e23f303c22d96fcb1dea82d386d8e8ed945f8c83a07c7f63c935471dbd"
	I1101 10:20:05.969169  757464 cri.go:89] found id: "def0c7222196bef86484e9e3c0a80fd1e6c0281c8d8ab1bbf3ec0fb56299940b"
	I1101 10:20:05.969173  757464 cri.go:89] found id: "34df676c07e5e1c97b53a43963c2ebbd436e0bd1bf7587e9f70aea3ccac71699"
	I1101 10:20:05.969195  757464 cri.go:89] found id: "1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc"
	I1101 10:20:05.969197  757464 cri.go:89] found id: "60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7"
	I1101 10:20:05.969200  757464 cri.go:89] found id: ""
	I1101 10:20:05.969244  757464 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:20:05.984543  757464 out.go:203] 
	W1101 10:20:05.985876  757464 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:20:05.985925  757464 out.go:285] * 
	* 
	W1101 10:20:05.990015  757464 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:20:05.991291  757464 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-556573 --alsologtostderr -v=1 failed: exit status 80
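For reference, the failing pause sequence above can be replayed by hand. This is a minimal sketch only, assuming the old-k8s-version-556573 profile is still up and that out/minikube-linux-amd64 is the same binary used by the test; it simply re-runs the commands recorded in the ssh_runner lines, ending with the runc call that exits 1 because /run/runc is absent on the node.

	# Minimal reproduction sketch (assumption: node still running, same test binary).
	# These mirror the commands the pause path runs over SSH in the log above.
	out/minikube-linux-amd64 ssh -p old-k8s-version-556573 sudo systemctl is-active --quiet service kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-556573 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The step that actually fails: runc exits 1 with "open /run/runc: no such file or directory".
	out/minikube-linux-amd64 ssh -p old-k8s-version-556573 sudo runc list -f json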
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-556573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-556573:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e",
	        "Created": "2025-11-01T10:17:54.292571852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 750211,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:19:07.739790612Z",
	            "FinishedAt": "2025-11-01T10:19:06.818303299Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/hosts",
	        "LogPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e-json.log",
	        "Name": "/old-k8s-version-556573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-556573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-556573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e",
	                "LowerDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-556573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-556573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-556573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-556573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-556573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfe511a51a60770a4c992ec00dc1dff029279ab332cf23f8c0d746dfc58b1eb2",
	            "SandboxKey": "/var/run/docker/netns/cfe511a51a60",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-556573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:ca:f3:37:6b:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bbcdd55cf2cbe101dd2954fd5b3da9010f13fa5cf479e04754b13ce474d6499d",
	                    "EndpointID": "f5be21f45395ab78586dd177e73d3bc3a43db69f10edffc88406a8ab2be4529c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-556573",
	                        "fa365e4464f7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
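The port mappings in the inspect dump above are what the harness (and the pause command earlier) use to reach the node. A small sketch of pulling the SSH mapping back out and connecting by hand, assuming the container is still running and the 22/tcp -> 127.0.0.1:33183 mapping and key path from the sshutil line above are unchanged:

	# Same Go template as the cli_runner.go line in the pause log; prints 33183 here.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-556573
	# Connect directly using the profile's SSH key and user recorded by sshutil.go.
	ssh -o StrictHostKeyChecking=no -p 33183 -i /home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa docker@127.0.0.1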
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556573 -n old-k8s-version-556573
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556573 -n old-k8s-version-556573: exit status 2 (345.937305ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-556573 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-556573 logs -n 25: (1.227285194s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p cert-options-278823                                                                                                                                                                                                                        │ cert-options-278823       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p force-systemd-flag-767379 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ stop    │ -p kubernetes-upgrade-949166                                                                                                                                                                                                                  │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ stop    │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ ssh     │ force-systemd-flag-767379 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p force-systemd-flag-767379                                                                                                                                                                                                                  │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-556573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p no-preload-680879 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-556573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ old-k8s-version-556573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:19:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:19:13.906369  751704 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:19:13.906696  751704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:19:13.906713  751704 out.go:374] Setting ErrFile to fd 2...
	I1101 10:19:13.906720  751704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:19:13.907015  751704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:19:13.907484  751704 out.go:368] Setting JSON to false
	I1101 10:19:13.908829  751704 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10891,"bootTime":1761981463,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:19:13.908989  751704 start.go:143] virtualization: kvm guest
	I1101 10:19:13.910871  751704 out.go:179] * [no-preload-680879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:19:13.912111  751704 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:19:13.912137  751704 notify.go:221] Checking for updates...
	I1101 10:19:13.914183  751704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:19:13.915953  751704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:13.917094  751704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:19:13.918344  751704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:19:13.919394  751704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:19:13.921049  751704 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:19:13.921752  751704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:19:13.949759  751704 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:19:13.949923  751704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:19:14.026278  751704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:19:14.014732237 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:19:14.026395  751704 docker.go:319] overlay module found
	I1101 10:19:14.028147  751704 out.go:179] * Using the docker driver based on existing profile
	I1101 10:19:14.029450  751704 start.go:309] selected driver: docker
	I1101 10:19:14.029471  751704 start.go:930] validating driver "docker" against &{Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:19:14.029573  751704 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:19:14.030242  751704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:19:14.099496  751704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:19:14.087804922 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:19:14.099911  751704 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:19:14.099949  751704 cni.go:84] Creating CNI manager for ""
	I1101 10:19:14.100023  751704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:19:14.100075  751704 start.go:353] cluster config:
	{Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:19:14.102954  751704 out.go:179] * Starting "no-preload-680879" primary control-plane node in "no-preload-680879" cluster
	I1101 10:19:14.104054  751704 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:19:14.105351  751704 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:19:14.106399  751704 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:19:14.106532  751704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:19:14.106600  751704 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json ...
	I1101 10:19:14.106728  751704 cache.go:107] acquiring lock: {Name:mke74377eb8e8f0a2186d46bf4bdde02a944c052 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106786  751704 cache.go:107] acquiring lock: {Name:mke846f8ed0eae3f666a2c55755500ad865ceb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106802  751704 cache.go:107] acquiring lock: {Name:mk54c640473c09dfff1239ead2dd2d51481a015a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106868  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:19:14.106881  751704 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 172.118µs
	I1101 10:19:14.106892  751704 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:19:14.106892  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:19:14.106823  751704 cache.go:107] acquiring lock: {Name:mk1c05d679d90243f04dc9223673738f53287a15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106918  751704 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 123.988µs
	I1101 10:19:14.106917  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:19:14.106928  751704 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:19:14.106921  751704 cache.go:107] acquiring lock: {Name:mke53a0d558f57413c985e8c7d551691237ca10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106924  751704 cache.go:107] acquiring lock: {Name:mkf19fdae2c3486652a390b24771bb4742a08787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106934  751704 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 169.637µs
	I1101 10:19:14.106958  751704 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:19:14.106747  751704 cache.go:107] acquiring lock: {Name:mka96111944f8dc8ebfdcd94de79dafd069ca1d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106975  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:19:14.106980  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:19:14.106987  751704 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 79.806µs
	I1101 10:19:14.106988  751704 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 69.795µs
	I1101 10:19:14.106996  751704 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:19:14.107002  751704 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:19:14.106956  751704 cache.go:107] acquiring lock: {Name:mkcd303cc659630879e706aba8fe46f709be28e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.107028  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:19:14.107028  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:19:14.107040  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:19:14.107038  751704 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 317.209µs
	I1101 10:19:14.107049  751704 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:19:14.107048  751704 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 102.264µs
	I1101 10:19:14.107042  751704 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 269.507µs
	I1101 10:19:14.107056  751704 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:19:14.107058  751704 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:19:14.107067  751704 cache.go:87] Successfully saved all images to host disk.
	I1101 10:19:14.132517  751704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:19:14.132546  751704 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:19:14.132570  751704 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:19:14.132608  751704 start.go:360] acquireMachinesLock for no-preload-680879: {Name:mkb2bd3a5c4fc957e021ade32b7982a68330a2bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.132679  751704 start.go:364] duration metric: took 48.539µs to acquireMachinesLock for "no-preload-680879"
	I1101 10:19:14.132703  751704 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:19:14.132711  751704 fix.go:54] fixHost starting: 
	I1101 10:19:14.133012  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:14.156778  751704 fix.go:112] recreateIfNeeded on no-preload-680879: state=Stopped err=<nil>
	W1101 10:19:14.156819  751704 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:19:09.855370  734517 cri.go:89] found id: ""
	I1101 10:19:09.855400  734517 logs.go:282] 0 containers: []
	W1101 10:19:09.855411  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:09.855418  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:09.855471  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:09.885995  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:09.886022  734517 cri.go:89] found id: "5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:19:09.886026  734517 cri.go:89] found id: ""
	I1101 10:19:09.886036  734517 logs.go:282] 2 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed]
	I1101 10:19:09.886097  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:09.890892  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:09.895212  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:09.895276  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:09.925925  734517 cri.go:89] found id: ""
	I1101 10:19:09.925964  734517 logs.go:282] 0 containers: []
	W1101 10:19:09.925974  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:09.925983  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:09.926064  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:09.957057  734517 cri.go:89] found id: ""
	I1101 10:19:09.957091  734517 logs.go:282] 0 containers: []
	W1101 10:19:09.957102  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:09.957119  734517 logs.go:123] Gathering logs for kube-controller-manager [5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed] ...
	I1101 10:19:09.957132  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:19:09.987088  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:09.987120  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:10.029318  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:10.029372  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:10.068546  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:10.068593  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:10.140318  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:10.140368  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:10.206671  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:10.206699  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:10.206719  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:10.254465  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:10.254506  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:10.274210  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:10.274254  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:10.310826  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:10.310887  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:12.841952  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:12.842503  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:12.842563  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:12.842610  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:12.876012  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:12.876047  734517 cri.go:89] found id: ""
	I1101 10:19:12.876060  734517 logs.go:282] 1 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:12.876121  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:12.880716  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:12.880798  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:12.911534  734517 cri.go:89] found id: ""
	I1101 10:19:12.911561  734517 logs.go:282] 0 containers: []
	W1101 10:19:12.911569  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:12.911575  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:12.911635  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:12.949287  734517 cri.go:89] found id: ""
	I1101 10:19:12.949314  734517 logs.go:282] 0 containers: []
	W1101 10:19:12.949323  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:12.949329  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:12.949387  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:12.978640  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:12.978670  734517 cri.go:89] found id: ""
	I1101 10:19:12.978683  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:12.978760  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:12.983393  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:12.983462  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:13.015887  734517 cri.go:89] found id: ""
	I1101 10:19:13.015917  734517 logs.go:282] 0 containers: []
	W1101 10:19:13.015928  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:13.015937  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:13.016057  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:13.054914  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:13.055006  734517 cri.go:89] found id: "5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:19:13.055015  734517 cri.go:89] found id: ""
	I1101 10:19:13.055026  734517 logs.go:282] 2 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed]
	I1101 10:19:13.055100  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:13.059806  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:13.064258  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:13.064335  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:13.094414  734517 cri.go:89] found id: ""
	I1101 10:19:13.094443  734517 logs.go:282] 0 containers: []
	W1101 10:19:13.094454  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:13.094462  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:13.094536  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:13.126617  734517 cri.go:89] found id: ""
	I1101 10:19:13.126659  734517 logs.go:282] 0 containers: []
	W1101 10:19:13.126677  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:13.126708  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:13.126724  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:13.181917  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:13.181967  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:13.222519  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:13.222550  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:13.298526  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:13.298568  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:13.319609  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:13.319661  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:13.390332  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:13.390362  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:13.390382  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:13.432147  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:13.432197  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:13.484294  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:13.484343  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:13.518497  734517 logs.go:123] Gathering logs for kube-controller-manager [5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed] ...
	I1101 10:19:13.518526  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:19:13.706315  749992 cli_runner.go:164] Run: docker network inspect old-k8s-version-556573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:19:13.726524  749992 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 10:19:13.731452  749992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:19:13.743248  749992 kubeadm.go:884] updating cluster {Name:old-k8s-version-556573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:19:13.743417  749992 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:19:13.743467  749992 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:19:13.785358  749992 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:19:13.785386  749992 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:19:13.785443  749992 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:19:13.816610  749992 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:19:13.816636  749992 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:19:13.816645  749992 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1101 10:19:13.816786  749992 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-556573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:19:13.816910  749992 ssh_runner.go:195] Run: crio config
	I1101 10:19:13.872019  749992 cni.go:84] Creating CNI manager for ""
	I1101 10:19:13.872068  749992 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:19:13.872112  749992 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:19:13.872155  749992 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556573 NodeName:old-k8s-version-556573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:19:13.872724  749992 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-556573"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:19:13.872809  749992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:19:13.882622  749992 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:19:13.882694  749992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:19:13.892412  749992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:19:13.908682  749992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:19:13.924825  749992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1101 10:19:13.942231  749992 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:19:13.947571  749992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:19:13.960716  749992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:14.068595  749992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:19:14.096121  749992 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573 for IP: 192.168.94.2
	I1101 10:19:14.096152  749992 certs.go:195] generating shared ca certs ...
	I1101 10:19:14.096176  749992 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:14.096422  749992 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:19:14.096488  749992 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:19:14.096506  749992 certs.go:257] generating profile certs ...
	I1101 10:19:14.096639  749992 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.key
	I1101 10:19:14.096727  749992 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f
	I1101 10:19:14.096783  749992 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key
	I1101 10:19:14.096956  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:19:14.097006  749992 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:19:14.097022  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:19:14.097051  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:19:14.097086  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:19:14.097116  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:19:14.097166  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:19:14.097933  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:19:14.122097  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:19:14.146186  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:19:14.171424  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:19:14.199388  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:19:14.227146  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:19:14.248660  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:19:14.272317  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:19:14.301998  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:19:14.333403  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:19:14.354467  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:19:14.375874  749992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:19:14.391454  749992 ssh_runner.go:195] Run: openssl version
	I1101 10:19:14.400020  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:19:14.410531  749992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:19:14.415311  749992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:19:14.415382  749992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:19:14.460172  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:19:14.472376  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:19:14.483536  749992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:14.488585  749992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:14.488680  749992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:14.533215  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:19:14.544014  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:19:14.554184  749992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:19:14.558978  749992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:19:14.559057  749992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:19:14.601539  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:19:14.611265  749992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:19:14.616160  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:19:14.665063  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:19:14.723667  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:19:14.780955  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:19:14.842737  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:19:14.887691  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:19:14.929915  749992 kubeadm.go:401] StartCluster: {Name:old-k8s-version-556573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:19:14.930067  749992 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:19:14.930158  749992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:19:14.969523  749992 cri.go:89] found id: "f7ba02ac9362802eef20c5f8870a35d429e636eb86c22620f260caf726977133"
	I1101 10:19:14.969557  749992 cri.go:89] found id: "898589e23f303c22d96fcb1dea82d386d8e8ed945f8c83a07c7f63c935471dbd"
	I1101 10:19:14.969562  749992 cri.go:89] found id: "def0c7222196bef86484e9e3c0a80fd1e6c0281c8d8ab1bbf3ec0fb56299940b"
	I1101 10:19:14.969568  749992 cri.go:89] found id: "34df676c07e5e1c97b53a43963c2ebbd436e0bd1bf7587e9f70aea3ccac71699"
	I1101 10:19:14.969572  749992 cri.go:89] found id: ""
	I1101 10:19:14.969624  749992 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:19:14.984310  749992 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:19:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:19:14.984386  749992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:19:14.995019  749992 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:19:14.995046  749992 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:19:14.995096  749992 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:19:15.005083  749992 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:19:15.005942  749992 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556573" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:15.006345  749992 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556573" cluster setting kubeconfig missing "old-k8s-version-556573" context setting]
	I1101 10:19:15.006965  749992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:15.008856  749992 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:19:15.018284  749992 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1101 10:19:15.018329  749992 kubeadm.go:602] duration metric: took 23.275022ms to restartPrimaryControlPlane
	I1101 10:19:15.018342  749992 kubeadm.go:403] duration metric: took 88.447176ms to StartCluster
	I1101 10:19:15.018362  749992 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:15.018444  749992 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:15.019454  749992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:15.019729  749992 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:19:15.019806  749992 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:19:15.019931  749992 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-556573"
	I1101 10:19:15.019968  749992 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-556573"
	W1101 10:19:15.019980  749992 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:19:15.020001  749992 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:19:15.020026  749992 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-556573"
	I1101 10:19:15.020012  749992 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:19:15.020057  749992 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-556573"
	I1101 10:19:15.020004  749992 addons.go:70] Setting dashboard=true in profile "old-k8s-version-556573"
	I1101 10:19:15.020114  749992 addons.go:239] Setting addon dashboard=true in "old-k8s-version-556573"
	W1101 10:19:15.020125  749992 addons.go:248] addon dashboard should already be in state true
	I1101 10:19:15.020159  749992 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:19:15.020401  749992 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:19:15.020578  749992 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:19:15.020658  749992 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:19:15.024738  749992 out.go:179] * Verifying Kubernetes components...
	I1101 10:19:15.026339  749992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:15.047381  749992 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-556573"
	W1101 10:19:15.047412  749992 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:19:15.047445  749992 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:19:15.047967  749992 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:19:15.048128  749992 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:19:15.049318  749992 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:19:15.049364  749992 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:19:15.049382  749992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:19:15.049447  749992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:19:15.051540  749992 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:19:15.053825  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:19:15.053868  749992 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:19:15.053951  749992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:19:15.076026  749992 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:19:15.076054  749992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:19:15.076121  749992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:19:15.081981  749992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:19:15.090592  749992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:19:15.107213  749992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:19:15.184207  749992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:19:15.201303  749992 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-556573" to be "Ready" ...
	I1101 10:19:15.211343  749992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:19:15.221660  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:19:15.221771  749992 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:19:15.235476  749992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:19:15.243708  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:19:15.243750  749992 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:19:15.263411  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:19:15.263447  749992 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:19:15.283814  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:19:15.283865  749992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:19:15.302435  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:19:15.302463  749992 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:19:15.319985  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:19:15.320026  749992 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:19:15.336028  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:19:15.336058  749992 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:19:15.352358  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:19:15.352400  749992 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:19:15.368234  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:19:15.368266  749992 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:19:15.383330  749992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:19:17.246248  749992 node_ready.go:49] node "old-k8s-version-556573" is "Ready"
	I1101 10:19:17.246302  749992 node_ready.go:38] duration metric: took 2.044967908s for node "old-k8s-version-556573" to be "Ready" ...
	I1101 10:19:17.246323  749992 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:19:17.246395  749992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:19:17.939894  749992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.728461253s)
	I1101 10:19:17.939984  749992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.704466481s)
	I1101 10:19:18.309222  749992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.925834389s)
	I1101 10:19:18.309268  749992 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.062847788s)
	I1101 10:19:18.309289  749992 api_server.go:72] duration metric: took 3.289529128s to wait for apiserver process to appear ...
	I1101 10:19:18.309295  749992 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:19:18.309317  749992 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:19:18.310675  749992 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-556573 addons enable metrics-server
	
	I1101 10:19:18.312581  749992 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 10:19:14.158542  751704 out.go:252] * Restarting existing docker container for "no-preload-680879" ...
	I1101 10:19:14.158664  751704 cli_runner.go:164] Run: docker start no-preload-680879
	I1101 10:19:14.451848  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:14.473899  751704 kic.go:430] container "no-preload-680879" state is running.
	I1101 10:19:14.474323  751704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:19:14.494893  751704 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json ...
	I1101 10:19:14.495209  751704 machine.go:94] provisionDockerMachine start ...
	I1101 10:19:14.495304  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:14.516210  751704 main.go:143] libmachine: Using SSH client type: native
	I1101 10:19:14.516592  751704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1101 10:19:14.516612  751704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:19:14.517488  751704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40124->127.0.0.1:33188: read: connection reset by peer
	I1101 10:19:17.671083  751704 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-680879
	
	I1101 10:19:17.671116  751704 ubuntu.go:182] provisioning hostname "no-preload-680879"
	I1101 10:19:17.671183  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:17.693711  751704 main.go:143] libmachine: Using SSH client type: native
	I1101 10:19:17.694046  751704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1101 10:19:17.694069  751704 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-680879 && echo "no-preload-680879" | sudo tee /etc/hostname
	I1101 10:19:17.865511  751704 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-680879
	
	I1101 10:19:17.865598  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:17.885170  751704 main.go:143] libmachine: Using SSH client type: native
	I1101 10:19:17.885510  751704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1101 10:19:17.885535  751704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-680879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-680879/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-680879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:19:18.039391  751704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:19:18.039441  751704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:19:18.039471  751704 ubuntu.go:190] setting up certificates
	I1101 10:19:18.039488  751704 provision.go:84] configureAuth start
	I1101 10:19:18.039556  751704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:19:18.060079  751704 provision.go:143] copyHostCerts
	I1101 10:19:18.060161  751704 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:19:18.060186  751704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:19:18.060285  751704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:19:18.060447  751704 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:19:18.060461  751704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:19:18.060504  751704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:19:18.060591  751704 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:19:18.060603  751704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:19:18.060641  751704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:19:18.060713  751704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.no-preload-680879 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-680879]
	I1101 10:19:18.373054  751704 provision.go:177] copyRemoteCerts
	I1101 10:19:18.373135  751704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:19:18.373202  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:18.396141  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:18.506746  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:19:18.535033  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:19:18.566780  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:19:18.594786  751704 provision.go:87] duration metric: took 555.279346ms to configureAuth
	I1101 10:19:18.594824  751704 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:19:18.595042  751704 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:19:18.595177  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:18.616703  751704 main.go:143] libmachine: Using SSH client type: native
	I1101 10:19:18.616951  751704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1101 10:19:18.616972  751704 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:19:16.064474  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:16.065057  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:16.065118  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:16.065173  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:16.097289  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:16.097313  734517 cri.go:89] found id: ""
	I1101 10:19:16.097324  734517 logs.go:282] 1 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:16.097390  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:16.102090  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:16.102169  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:16.133466  734517 cri.go:89] found id: ""
	I1101 10:19:16.133501  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.133511  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:16.133519  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:16.133585  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:16.164076  734517 cri.go:89] found id: ""
	I1101 10:19:16.164104  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.164113  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:16.164120  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:16.164181  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:16.197390  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:16.197420  734517 cri.go:89] found id: ""
	I1101 10:19:16.197432  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:16.197502  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:16.202249  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:16.202319  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:16.237786  734517 cri.go:89] found id: ""
	I1101 10:19:16.237821  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.237832  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:16.237867  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:16.237931  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:16.271050  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:16.271077  734517 cri.go:89] found id: ""
	I1101 10:19:16.271088  734517 logs.go:282] 1 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:16.271232  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:16.276136  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:16.276226  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:16.309952  734517 cri.go:89] found id: ""
	I1101 10:19:16.309981  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.309989  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:16.309995  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:16.310077  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:16.346364  734517 cri.go:89] found id: ""
	I1101 10:19:16.346402  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.346414  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:16.346429  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:16.346447  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:16.429966  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:16.430014  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:16.453622  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:16.453662  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:16.524270  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:16.524299  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:16.524317  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:16.563420  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:16.563474  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:16.622109  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:16.622152  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:16.656486  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:16.656525  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:16.708697  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:16.708750  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
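The diagnostics gathered above can be reproduced by hand against the node; a minimal sketch, assuming shell access to the machine (for example via minikube ssh), using only the commands already shown in the log:

    # list control-plane containers the same way the harness does
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    # tail the kubelet and CRI-O journals
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # container status, falling back to docker only if crictl is missing
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a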
	I1101 10:19:19.247958  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:19.248479  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:19.248544  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:19.248609  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:19.292223  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:19.292305  734517 cri.go:89] found id: ""
	I1101 10:19:19.292318  734517 logs.go:282] 1 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:19.292379  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:19.298153  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:19.298252  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:19.334336  734517 cri.go:89] found id: ""
	I1101 10:19:19.334364  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.334372  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:19.334379  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:19.334425  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:19.368799  734517 cri.go:89] found id: ""
	I1101 10:19:19.368831  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.368852  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:19.368861  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:19.368922  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:19.404579  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:19.404611  734517 cri.go:89] found id: ""
	I1101 10:19:19.404623  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:19.404693  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:19.409229  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:19.409312  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:19.439614  734517 cri.go:89] found id: ""
	I1101 10:19:19.439649  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.439660  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:19.439668  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:19.439739  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:19.471181  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:19.471207  734517 cri.go:89] found id: ""
	I1101 10:19:19.471218  734517 logs.go:282] 1 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:19.471275  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:19.475921  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:19.475991  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:19.506647  734517 cri.go:89] found id: ""
	I1101 10:19:19.506677  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.506686  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:19.506692  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:19.506764  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:19.538745  734517 cri.go:89] found id: ""
	I1101 10:19:19.538781  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.538793  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:19.538807  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:19.538820  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:19.619331  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:19.619435  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:19.642129  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:19.642175  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:19.707798  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:19.707820  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:19.707871  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:19.748329  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:19.748362  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:19.797120  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:19.797153  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:19.828136  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:19.828177  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:18.954445  751704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:19:18.954488  751704 machine.go:97] duration metric: took 4.459254718s to provisionDockerMachine
	I1101 10:19:18.954505  751704 start.go:293] postStartSetup for "no-preload-680879" (driver="docker")
	I1101 10:19:18.954520  751704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:19:18.954592  751704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:19:18.954646  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:18.975955  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:19.081641  751704 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:19:19.085894  751704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:19:19.085933  751704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:19:19.085946  751704 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:19:19.086013  751704 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:19:19.086087  751704 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:19:19.086178  751704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:19:19.095576  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:19:19.115984  751704 start.go:296] duration metric: took 161.458399ms for postStartSetup
	I1101 10:19:19.116064  751704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:19:19.116107  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:19.134184  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:19.234946  751704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:19:19.240054  751704 fix.go:56] duration metric: took 5.107333091s for fixHost
	I1101 10:19:19.240087  751704 start.go:83] releasing machines lock for "no-preload-680879", held for 5.1073946s
	I1101 10:19:19.240161  751704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:19:19.262761  751704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:19:19.262795  751704 ssh_runner.go:195] Run: cat /version.json
	I1101 10:19:19.262868  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:19.262881  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:19.289084  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:19.289094  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:19.459262  751704 ssh_runner.go:195] Run: systemctl --version
	I1101 10:19:19.467531  751704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:19:19.508897  751704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:19:19.514015  751704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:19:19.514091  751704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:19:19.523940  751704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
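The bridge and podman CNI configs are not deleted here, only renamed with a .mk_disabled suffix so the change is reversible; a lightly cleaned-up sketch of the find/mv step above:

    # park any bridge/podman CNI configs so only the expected CNI is active
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;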
	I1101 10:19:19.523965  751704 start.go:496] detecting cgroup driver to use...
	I1101 10:19:19.524001  751704 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:19:19.524047  751704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:19:19.541745  751704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:19:19.556237  751704 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:19:19.556316  751704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:19:19.574192  751704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:19:19.588810  751704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:19:19.683130  751704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:19:19.780941  751704 docker.go:234] disabling docker service ...
	I1101 10:19:19.781011  751704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:19:19.796483  751704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:19:19.810507  751704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:19:19.917005  751704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:19:20.007056  751704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
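Only one runtime may own the CRI socket, so before pointing crictl at CRI-O the harness stops, disables, and masks both cri-docker and docker; condensed, the sequence above is:

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service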
	I1101 10:19:20.021468  751704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:19:20.037147  751704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:19:20.037206  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.047599  751704 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:19:20.047677  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.058531  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.069246  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.079292  751704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:19:20.088398  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.098679  751704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.110455  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.120893  751704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:19:20.129381  751704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:19:20.138135  751704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:20.226092  751704 ssh_runner.go:195] Run: sudo systemctl restart crio
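The CRI-O reconfiguration above is a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf plus a crictl endpoint file, followed by a restart; a condensed sketch with the values taken from the log:

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pause image and cgroup driver expected by this kubeadm setup
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    # let pods bind low ports without extra privileges
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio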
	I1101 10:19:20.346828  751704 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:19:20.346919  751704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:19:20.351801  751704 start.go:564] Will wait 60s for crictl version
	I1101 10:19:20.351876  751704 ssh_runner.go:195] Run: which crictl
	I1101 10:19:20.356247  751704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:19:20.384685  751704 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:19:20.384783  751704 ssh_runner.go:195] Run: crio --version
	I1101 10:19:20.415698  751704 ssh_runner.go:195] Run: crio --version
	I1101 10:19:20.447467  751704 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:19:20.448398  751704 cli_runner.go:164] Run: docker network inspect no-preload-680879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:19:20.466053  751704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:19:20.470688  751704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:19:20.482429  751704 kubeadm.go:884] updating cluster {Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:19:20.482569  751704 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:19:20.482613  751704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:19:20.516114  751704 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:19:20.516138  751704 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:19:20.516146  751704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:19:20.516264  751704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-680879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
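The ExecStart override above is not edited into the installed unit itself; it is written as a systemd drop-in (the 367-byte 10-kubeadm.conf scp'd a few lines below), so the effective kubelet configuration can be inspected on the node with:

    # show the kubelet unit together with the minikube drop-in
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf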
	I1101 10:19:20.516329  751704 ssh_runner.go:195] Run: crio config
	I1101 10:19:20.565114  751704 cni.go:84] Creating CNI manager for ""
	I1101 10:19:20.565138  751704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:19:20.565159  751704 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:19:20.565183  751704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-680879 NodeName:no-preload-680879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:19:20.565324  751704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-680879"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:19:20.565388  751704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:19:20.574785  751704 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:19:20.574892  751704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:19:20.583796  751704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:19:20.598416  751704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:19:20.611988  751704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
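The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and later compared against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring (the diff run appears further down in this log); the check amounts to:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "running cluster does not require reconfiguration"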
	I1101 10:19:20.625018  751704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:19:20.629192  751704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
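Both host.minikube.internal and control-plane.minikube.internal are added to /etc/hosts with the same idempotent pattern: drop any stale entry, append the current one, and copy the result back into place. A minimal sketch for one entry, using the IP from the log:

    # add the entry only if the hostname is not already present
    grep -q $'\tcontrol-plane.minikube.internal$' /etc/hosts || {
      { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;
        printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    }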
	I1101 10:19:20.640027  751704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:20.724122  751704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:19:20.750501  751704 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879 for IP: 192.168.85.2
	I1101 10:19:20.750536  751704 certs.go:195] generating shared ca certs ...
	I1101 10:19:20.750569  751704 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:20.750745  751704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:19:20.750800  751704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:19:20.750813  751704 certs.go:257] generating profile certs ...
	I1101 10:19:20.750949  751704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.key
	I1101 10:19:20.751023  751704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d
	I1101 10:19:20.751079  751704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key
	I1101 10:19:20.751235  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:19:20.751276  751704 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:19:20.751289  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:19:20.751321  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:19:20.751356  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:19:20.751388  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:19:20.751444  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:19:20.752339  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:19:20.772518  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:19:20.793515  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:19:20.815510  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:19:20.839357  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:19:20.861083  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:19:20.881889  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:19:20.902415  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:19:20.923281  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:19:20.945512  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:19:20.967695  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:19:20.989326  751704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:19:21.005316  751704 ssh_runner.go:195] Run: openssl version
	I1101 10:19:21.012429  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:19:21.023160  751704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:21.027812  751704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:21.027916  751704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:21.066176  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:19:21.076944  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:19:21.087446  751704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:19:21.092261  751704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:19:21.092351  751704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:19:21.129032  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:19:21.139051  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:19:21.149537  751704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:19:21.154578  751704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:19:21.154648  751704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:19:21.193050  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
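The link names created under /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the certificates, which is what OpenSSL's hashed-directory CA lookup expects; one such link can be recreated by hand with:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"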
	I1101 10:19:21.203218  751704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:19:21.208012  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:19:21.245512  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:19:21.295248  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:19:21.335754  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:19:21.381868  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:19:21.440668  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
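Each of the -checkend 86400 calls asks openssl whether the certificate will still be valid 24 hours from now (exit status 0 means it will not expire within that window), presumably so the existing control-plane certs can be reused instead of regenerated; for a single cert:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h"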
	I1101 10:19:21.502709  751704 kubeadm.go:401] StartCluster: {Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:19:21.502902  751704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:19:21.502985  751704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:19:21.538378  751704 cri.go:89] found id: "6fe1794e14c177d264a3e5610bef578069b247e5deb7054c93fb9a70b2ccf7ba"
	I1101 10:19:21.538406  751704 cri.go:89] found id: "a1a084abd5f06aa1899bd7372a8496c6c8eb79b98488279f9c9679a6c0338270"
	I1101 10:19:21.538412  751704 cri.go:89] found id: "8a355ad3dea63414c9311a3f417e38b58b4c399b8aa2b4497aea7e6cd9510af8"
	I1101 10:19:21.538418  751704 cri.go:89] found id: "be916f84dfad93d8e52891dd7a642ef5783afd3b0e1978d42fc11b92d8812a08"
	I1101 10:19:21.538423  751704 cri.go:89] found id: ""
	I1101 10:19:21.538481  751704 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:19:21.553436  751704 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:19:21Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:19:21.553541  751704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:19:21.564329  751704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:19:21.564357  751704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:19:21.564418  751704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:19:21.574610  751704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:19:21.575434  751704 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-680879" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:21.575918  751704 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-680879" cluster setting kubeconfig missing "no-preload-680879" context setting]
	I1101 10:19:21.576605  751704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:21.578372  751704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:19:21.588950  751704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:19:21.588998  751704 kubeadm.go:602] duration metric: took 24.634289ms to restartPrimaryControlPlane
	I1101 10:19:21.589012  751704 kubeadm.go:403] duration metric: took 86.317698ms to StartCluster
	I1101 10:19:21.589036  751704 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:21.589124  751704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:21.591071  751704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:21.591409  751704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:19:21.591548  751704 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:19:21.591659  751704 addons.go:70] Setting storage-provisioner=true in profile "no-preload-680879"
	I1101 10:19:21.591674  751704 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:19:21.591684  751704 addons.go:239] Setting addon storage-provisioner=true in "no-preload-680879"
	W1101 10:19:21.591692  751704 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:19:21.591693  751704 addons.go:70] Setting dashboard=true in profile "no-preload-680879"
	I1101 10:19:21.591716  751704 addons.go:239] Setting addon dashboard=true in "no-preload-680879"
	I1101 10:19:21.591724  751704 addons.go:70] Setting default-storageclass=true in profile "no-preload-680879"
	W1101 10:19:21.591734  751704 addons.go:248] addon dashboard should already be in state true
	I1101 10:19:21.591742  751704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-680879"
	I1101 10:19:21.591763  751704 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:19:21.591726  751704 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:19:21.592128  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:21.592358  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:21.592395  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:21.595061  751704 out.go:179] * Verifying Kubernetes components...
	I1101 10:19:21.596505  751704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:21.620285  751704 addons.go:239] Setting addon default-storageclass=true in "no-preload-680879"
	W1101 10:19:21.620312  751704 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:19:21.620343  751704 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:19:21.620908  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:21.623328  751704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:19:21.623338  751704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:19:21.624570  751704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:19:21.624607  751704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:19:21.624583  751704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:19:18.313513  749992 addons.go:515] duration metric: took 3.293714409s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:19:18.314886  749992 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 10:19:18.314911  749992 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 10:19:18.809396  749992 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:19:18.814293  749992 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 10:19:18.815913  749992 api_server.go:141] control plane version: v1.28.0
	I1101 10:19:18.815948  749992 api_server.go:131] duration metric: took 506.644406ms to wait for apiserver health ...
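The health wait above is a plain poll of the apiserver's /healthz endpoint until the 500 responses (rbac/bootstrap-roles still pending) turn into 200; roughly the same check can be made by hand with curl, using the IP and port from the log (-k only to keep the sketch free of CA-bundle handling):

    until curl -ks https://192.168.94.2:8443/healthz | grep -qx ok; do sleep 1; done
    curl -ks 'https://192.168.94.2:8443/healthz?verbose'   # per-check breakdown like the one logged above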
	I1101 10:19:18.815958  749992 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:19:18.827251  749992 system_pods.go:59] 8 kube-system pods found
	I1101 10:19:18.827308  749992 system_pods.go:61] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:19:18.827323  749992 system_pods.go:61] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:19:18.827338  749992 system_pods.go:61] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:19:18.827347  749992 system_pods.go:61] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:19:18.827354  749992 system_pods.go:61] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:19:18.827363  749992 system_pods.go:61] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:19:18.827370  749992 system_pods.go:61] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:19:18.827378  749992 system_pods.go:61] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:19:18.827388  749992 system_pods.go:74] duration metric: took 11.422494ms to wait for pod list to return data ...
	I1101 10:19:18.827399  749992 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:19:18.831006  749992 default_sa.go:45] found service account: "default"
	I1101 10:19:18.831052  749992 default_sa.go:55] duration metric: took 3.645079ms for default service account to be created ...
	I1101 10:19:18.831065  749992 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:19:18.837717  749992 system_pods.go:86] 8 kube-system pods found
	I1101 10:19:18.837765  749992 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:19:18.837780  749992 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:19:18.837791  749992 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:19:18.837803  749992 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:19:18.837812  749992 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:19:18.837821  749992 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:19:18.837828  749992 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:19:18.837848  749992 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:19:18.837862  749992 system_pods.go:126] duration metric: took 6.787789ms to wait for k8s-apps to be running ...
	I1101 10:19:18.837872  749992 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:19:18.837930  749992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:19:18.855707  749992 system_svc.go:56] duration metric: took 17.820674ms WaitForService to wait for kubelet
	I1101 10:19:18.855745  749992 kubeadm.go:587] duration metric: took 3.835985401s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:19:18.855768  749992 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:19:18.858938  749992 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:19:18.858968  749992 node_conditions.go:123] node cpu capacity is 8
	I1101 10:19:18.858982  749992 node_conditions.go:105] duration metric: took 3.208896ms to run NodePressure ...
	I1101 10:19:18.858995  749992 start.go:242] waiting for startup goroutines ...
	I1101 10:19:18.859002  749992 start.go:247] waiting for cluster config update ...
	I1101 10:19:18.859013  749992 start.go:256] writing updated cluster config ...
	I1101 10:19:18.859268  749992 ssh_runner.go:195] Run: rm -f paused
	I1101 10:19:18.863732  749992 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:19:18.868963  749992 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-cprx9" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:19:20.875614  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
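The trailing wait loops over the kube-system control-plane pods until each is Ready (or gone); an equivalent manual check, assuming kubectl is already pointed at this cluster's context (the profile name old-k8s-version-556573 is only inferred from the pod names above):

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl -n kube-system wait pod --all --for=condition=Ready --timeout=4m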
	I1101 10:19:21.624699  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:21.625869  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:19:21.625900  751704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:19:21.625985  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:21.655924  751704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:19:21.655951  751704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:19:21.656033  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:21.658198  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:21.665947  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:21.684777  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:21.775433  751704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:19:21.791481  751704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:19:21.793208  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:19:21.793237  751704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:19:21.795242  751704 node_ready.go:35] waiting up to 6m0s for node "no-preload-680879" to be "Ready" ...
	I1101 10:19:21.809200  751704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:19:21.815183  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:19:21.815215  751704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:19:21.842695  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:19:21.842811  751704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:19:21.868910  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:19:21.868943  751704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:19:21.890585  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:19:21.890619  751704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:19:21.908119  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:19:21.908149  751704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:19:21.926133  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:19:21.926165  751704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:19:21.943110  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:19:21.943140  751704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:19:21.959502  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:19:21.959536  751704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:19:21.977211  751704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:19:23.222263  751704 node_ready.go:49] node "no-preload-680879" is "Ready"
	I1101 10:19:23.222318  751704 node_ready.go:38] duration metric: took 1.427019057s for node "no-preload-680879" to be "Ready" ...
	I1101 10:19:23.222338  751704 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:19:23.222404  751704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:19:23.746820  751704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.95529649s)
	I1101 10:19:23.746900  751704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937678754s)
	I1101 10:19:23.747166  751704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.769905549s)
	I1101 10:19:23.747199  751704 api_server.go:72] duration metric: took 2.155750455s to wait for apiserver process to appear ...
	I1101 10:19:23.747216  751704 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:19:23.747238  751704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:19:23.748776  751704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-680879 addons enable metrics-server
	
	I1101 10:19:23.751489  751704 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:19:23.751521  751704 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:19:23.755321  751704 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 10:19:23.756169  751704 addons.go:515] duration metric: took 2.16462668s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:19:19.896786  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:19.896847  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:22.432926  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:22.433429  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:22.433483  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:22.433571  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:22.470949  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:22.470976  734517 cri.go:89] found id: ""
	I1101 10:19:22.470988  734517 logs.go:282] 1 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:22.471043  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:22.476694  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:22.476768  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:22.510762  734517 cri.go:89] found id: ""
	I1101 10:19:22.510796  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.510807  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:22.510815  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:22.510885  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:22.547801  734517 cri.go:89] found id: ""
	I1101 10:19:22.547861  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.547873  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:22.547882  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:22.547941  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:22.583315  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:22.583341  734517 cri.go:89] found id: ""
	I1101 10:19:22.583352  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:22.583426  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:22.588943  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:22.589045  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:22.628933  734517 cri.go:89] found id: ""
	I1101 10:19:22.628969  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.628980  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:22.628989  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:22.629058  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:22.665509  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:22.665537  734517 cri.go:89] found id: ""
	I1101 10:19:22.665550  734517 logs.go:282] 1 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:22.665614  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:22.671002  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:22.671079  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:22.703400  734517 cri.go:89] found id: ""
	I1101 10:19:22.703431  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.703442  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:22.703450  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:22.703519  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:22.738119  734517 cri.go:89] found id: ""
	I1101 10:19:22.738157  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.738179  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:22.738195  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:22.738210  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:22.809674  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:22.809699  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:22.809717  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:22.849950  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:22.849990  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:22.906141  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:22.906186  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:22.936474  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:22.936509  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:22.982323  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:22.982374  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:23.026207  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:23.026247  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:23.114983  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:23.115100  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
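	[editor's note] The Run: lines above show the diagnostics pass: crictl container listings, per-container crictl logs, the crio and kubelet journals, and dmesg. The following Go sketch is purely illustrative of that collection step, with the commands copied from the log; running them locally via os/exec (instead of over SSH on the node, as minikube does) is an assumption of the sketch, not minikube's implementation.

	// gatherlogs: illustrative sketch of collecting the same node diagnostics
	// the log above shows (crictl, journalctl, dmesg). Commands are copied from
	// the "Run:" lines; executing them locally is an assumption for the example.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := []string{
			"sudo crictl ps -a --quiet --name=kube-apiserver",
			"sudo journalctl -u crio -n 400",
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		}
		for _, c := range cmds {
			// CombinedOutput captures stdout and stderr together, mirroring how
			// the report interleaves both streams for each gathered log.
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
		}
	}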
	W1101 10:19:22.876618  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:25.375173  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:27.375348  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	I1101 10:19:24.247750  751704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:19:24.252681  751704 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:19:24.252728  751704 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:19:24.747366  751704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:19:24.751788  751704 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:19:24.752925  751704 api_server.go:141] control plane version: v1.34.1
	I1101 10:19:24.752953  751704 api_server.go:131] duration metric: took 1.005725599s to wait for apiserver health ...
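	[editor's note] The exchange above is the usual apiserver warm-up pattern: /healthz returns 500 with a per-check breakdown while poststart hooks such as rbac/bootstrap-roles are still failing, then flips to 200 "ok". Below is a minimal, hypothetical Go sketch of such a poll loop; the URL, timeout, poll interval, and InsecureSkipVerify setting are illustrative assumptions and this is not minikube's actual implementation.

	// healthzpoll: illustrative sketch of polling an apiserver /healthz endpoint
	// until it reports "ok", treating 500 responses (and connection errors) as
	// "not ready yet". Not minikube's code; parameters are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a cluster-internal cert here, so the sketch
			// skips verification; a real client would trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// connection refused/reset: apiserver not up yet, retry
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 500 with the per-check breakdown seen above: log and retry
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}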
	I1101 10:19:24.752962  751704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:19:24.756509  751704 system_pods.go:59] 8 kube-system pods found
	I1101 10:19:24.756547  751704 system_pods.go:61] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:19:24.756556  751704 system_pods.go:61] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:19:24.756566  751704 system_pods.go:61] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:19:24.756575  751704 system_pods.go:61] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:19:24.756583  751704 system_pods.go:61] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:19:24.756593  751704 system_pods.go:61] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:19:24.756601  751704 system_pods.go:61] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:19:24.756617  751704 system_pods.go:61] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:19:24.756632  751704 system_pods.go:74] duration metric: took 3.660816ms to wait for pod list to return data ...
	I1101 10:19:24.756644  751704 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:19:24.759226  751704 default_sa.go:45] found service account: "default"
	I1101 10:19:24.759251  751704 default_sa.go:55] duration metric: took 2.59663ms for default service account to be created ...
	I1101 10:19:24.759263  751704 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:19:24.762366  751704 system_pods.go:86] 8 kube-system pods found
	I1101 10:19:24.762401  751704 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:19:24.762408  751704 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:19:24.762414  751704 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:19:24.762419  751704 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:19:24.762424  751704 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:19:24.762430  751704 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:19:24.762444  751704 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:19:24.762451  751704 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:19:24.762462  751704 system_pods.go:126] duration metric: took 3.19248ms to wait for k8s-apps to be running ...
	I1101 10:19:24.762470  751704 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:19:24.762527  751704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:19:24.776310  751704 system_svc.go:56] duration metric: took 13.822575ms WaitForService to wait for kubelet
	I1101 10:19:24.776348  751704 kubeadm.go:587] duration metric: took 3.184901564s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:19:24.776374  751704 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:19:24.779587  751704 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:19:24.779617  751704 node_conditions.go:123] node cpu capacity is 8
	I1101 10:19:24.779634  751704 node_conditions.go:105] duration metric: took 3.254067ms to run NodePressure ...
	I1101 10:19:24.779651  751704 start.go:242] waiting for startup goroutines ...
	I1101 10:19:24.779660  751704 start.go:247] waiting for cluster config update ...
	I1101 10:19:24.779676  751704 start.go:256] writing updated cluster config ...
	I1101 10:19:24.779992  751704 ssh_runner.go:195] Run: rm -f paused
	I1101 10:19:24.784439  751704 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:19:24.789934  751704 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rh4z7" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:19:26.796088  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:28.796953  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:25.639713  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1101 10:19:29.875431  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:31.875777  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:30.797265  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:32.797768  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:30.640105  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:19:30.640224  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:30.640309  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:30.669948  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:30.669974  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:30.669980  734517 cri.go:89] found id: ""
	I1101 10:19:30.669990  734517 logs.go:282] 2 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:30.670068  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:30.674384  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:30.678429  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:30.678513  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:30.709314  734517 cri.go:89] found id: ""
	I1101 10:19:30.709345  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.709354  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:30.709361  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:30.709420  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:30.748279  734517 cri.go:89] found id: ""
	I1101 10:19:30.748310  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.748322  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:30.748330  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:30.748392  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:30.790674  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:30.790703  734517 cri.go:89] found id: ""
	I1101 10:19:30.790714  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:30.790780  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:30.796615  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:30.796713  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:30.840009  734517 cri.go:89] found id: ""
	I1101 10:19:30.840048  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.840060  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:30.840069  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:30.840477  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:30.884710  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:30.884740  734517 cri.go:89] found id: ""
	I1101 10:19:30.884752  734517 logs.go:282] 1 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:30.884824  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:30.890423  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:30.890501  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:30.929390  734517 cri.go:89] found id: ""
	I1101 10:19:30.929423  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.929445  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:30.929455  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:30.929607  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:30.973670  734517 cri.go:89] found id: ""
	I1101 10:19:30.973704  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.973715  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:30.973735  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:30.973754  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:31.004505  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:31.004541  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:34.375939  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:36.375989  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:35.296268  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:37.794812  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:38.874669  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:41.376959  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:39.796287  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:41.796342  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:41.094725  734517 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.09015546s)
	W1101 10:19:41.094778  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 10:19:41.094787  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:41.094800  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:41.124251  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:41.124290  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:41.180573  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:41.180619  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:41.216766  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:41.216811  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:41.251813  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:41.251870  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:41.303462  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:41.303504  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:41.339779  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:41.339816  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:43.915927  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1101 10:19:43.875685  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:45.875727  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:44.295613  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:46.295870  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:48.296468  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:45.568334  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:34984->192.168.103.2:8443: read: connection reset by peer
	I1101 10:19:45.568421  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:45.568493  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:45.599017  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:45.599044  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:45.599050  734517 cri.go:89] found id: ""
	I1101 10:19:45.599060  734517 logs.go:282] 2 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:45.599116  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.603584  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.607753  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:45.607819  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:45.636802  734517 cri.go:89] found id: ""
	I1101 10:19:45.636830  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.636868  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:45.636876  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:45.636940  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:45.666802  734517 cri.go:89] found id: ""
	I1101 10:19:45.666828  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.666873  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:45.666880  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:45.666932  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:45.695967  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:45.695997  734517 cri.go:89] found id: ""
	I1101 10:19:45.696008  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:45.696079  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.700314  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:45.700384  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:45.728533  734517 cri.go:89] found id: ""
	I1101 10:19:45.728571  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.728580  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:45.728586  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:45.728648  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:45.758235  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:45.758263  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:45.758269  734517 cri.go:89] found id: ""
	I1101 10:19:45.758281  734517 logs.go:282] 2 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:45.758348  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.762777  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.766925  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:45.767004  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:45.796444  734517 cri.go:89] found id: ""
	I1101 10:19:45.796470  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.796481  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:45.796488  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:45.796551  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:45.825314  734517 cri.go:89] found id: ""
	I1101 10:19:45.825342  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.825354  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:45.825374  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:45.825391  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:45.855107  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:45.855134  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:45.885414  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:45.885442  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:45.918148  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:45.918184  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:45.951217  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:45.951252  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:46.006867  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:46.006915  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:46.085345  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:46.085386  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:46.104730  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:46.104766  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:46.164389  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:46.164409  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:46.164425  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:46.200848  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:46.200885  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:48.750693  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:48.751183  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:48.751240  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:48.751295  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:48.781751  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:48.781779  734517 cri.go:89] found id: ""
	I1101 10:19:48.781791  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:19:48.781864  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:48.786232  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:48.786310  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:48.816117  734517 cri.go:89] found id: ""
	I1101 10:19:48.816143  734517 logs.go:282] 0 containers: []
	W1101 10:19:48.816159  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:48.816166  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:48.816240  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:48.846244  734517 cri.go:89] found id: ""
	I1101 10:19:48.846276  734517 logs.go:282] 0 containers: []
	W1101 10:19:48.846285  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:48.846292  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:48.846352  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:48.876090  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:48.876117  734517 cri.go:89] found id: ""
	I1101 10:19:48.876126  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:48.876178  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:48.880724  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:48.880811  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:48.909280  734517 cri.go:89] found id: ""
	I1101 10:19:48.909305  734517 logs.go:282] 0 containers: []
	W1101 10:19:48.909313  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:48.909319  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:48.909385  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:48.939374  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:48.939404  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:48.939410  734517 cri.go:89] found id: ""
	I1101 10:19:48.939421  734517 logs.go:282] 2 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:48.939482  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:48.943821  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:48.948103  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:48.948164  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:48.977963  734517 cri.go:89] found id: ""
	I1101 10:19:48.977988  734517 logs.go:282] 0 containers: []
	W1101 10:19:48.977996  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:48.978002  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:48.978055  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:49.008092  734517 cri.go:89] found id: ""
	I1101 10:19:49.008120  734517 logs.go:282] 0 containers: []
	W1101 10:19:49.008131  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:49.008178  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:49.008211  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:49.068277  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:49.068309  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:49.068334  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:49.118388  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:49.118430  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:49.149162  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:49.149194  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:49.183198  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:49.183239  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:49.267756  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:49.267799  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:49.287461  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:49.287495  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:49.323714  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:49.323755  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:49.351967  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:49.351998  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1101 10:19:48.375124  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	I1101 10:19:50.375076  749992 pod_ready.go:94] pod "coredns-5dd5756b68-cprx9" is "Ready"
	I1101 10:19:50.375111  749992 pod_ready.go:86] duration metric: took 31.506116562s for pod "coredns-5dd5756b68-cprx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.377714  749992 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.381431  749992 pod_ready.go:94] pod "etcd-old-k8s-version-556573" is "Ready"
	I1101 10:19:50.381458  749992 pod_ready.go:86] duration metric: took 3.720753ms for pod "etcd-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.384145  749992 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.387975  749992 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-556573" is "Ready"
	I1101 10:19:50.388002  749992 pod_ready.go:86] duration metric: took 3.831146ms for pod "kube-apiserver-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.390725  749992 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.574275  749992 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-556573" is "Ready"
	I1101 10:19:50.574314  749992 pod_ready.go:86] duration metric: took 183.564409ms for pod "kube-controller-manager-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.774176  749992 pod_ready.go:83] waiting for pod "kube-proxy-s9fsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:51.173486  749992 pod_ready.go:94] pod "kube-proxy-s9fsm" is "Ready"
	I1101 10:19:51.173516  749992 pod_ready.go:86] duration metric: took 399.310179ms for pod "kube-proxy-s9fsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:51.374482  749992 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:51.773087  749992 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-556573" is "Ready"
	I1101 10:19:51.773122  749992 pod_ready.go:86] duration metric: took 398.611575ms for pod "kube-scheduler-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:51.773138  749992 pod_ready.go:40] duration metric: took 32.909366231s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:19:51.820290  749992 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:19:51.822627  749992 out.go:203] 
	W1101 10:19:51.823943  749992 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:19:51.825182  749992 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:19:51.826371  749992 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-556573" cluster and "default" namespace by default
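
The warning just above flags a client/server skew: the host kubectl is v1.34.1 while the old-k8s-version cluster runs Kubernetes v1.28.0 (minor skew 6), and the log suggests routing commands through minikube's bundled kubectl instead. A minimal sketch of that workaround, using the profile name and binary path taken from this report (the exact invocations below are illustrative and not part of the captured output):

        # Use the profile-matched kubectl so the client version tracks the v1.28.0 cluster
        # rather than the host's v1.34.1 binary.
        out/minikube-linux-amd64 -p old-k8s-version-556573 kubectl -- get pods -A
        out/minikube-linux-amd64 -p old-k8s-version-556573 kubectl -- version --output=yaml
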
	W1101 10:19:50.795787  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:52.796811  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:51.917898  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:51.918327  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:51.918392  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:51.918454  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:51.950042  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:51.950065  734517 cri.go:89] found id: ""
	I1101 10:19:51.950076  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:19:51.950137  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:51.954479  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:51.954556  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:51.986469  734517 cri.go:89] found id: ""
	I1101 10:19:51.986495  734517 logs.go:282] 0 containers: []
	W1101 10:19:51.986502  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:51.986509  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:51.986555  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:52.015764  734517 cri.go:89] found id: ""
	I1101 10:19:52.015794  734517 logs.go:282] 0 containers: []
	W1101 10:19:52.015805  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:52.015814  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:52.015909  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:52.044792  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:52.044815  734517 cri.go:89] found id: ""
	I1101 10:19:52.044823  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:52.044917  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:52.049731  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:52.049813  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:52.081371  734517 cri.go:89] found id: ""
	I1101 10:19:52.081402  734517 logs.go:282] 0 containers: []
	W1101 10:19:52.081414  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:52.081423  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:52.081482  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:52.114665  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:52.114704  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:52.114816  734517 cri.go:89] found id: ""
	I1101 10:19:52.114828  734517 logs.go:282] 2 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:52.115065  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:52.120950  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:52.126220  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:52.126305  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:52.161044  734517 cri.go:89] found id: ""
	I1101 10:19:52.161072  734517 logs.go:282] 0 containers: []
	W1101 10:19:52.161081  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:52.161088  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:52.161150  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:52.195536  734517 cri.go:89] found id: ""
	I1101 10:19:52.195560  734517 logs.go:282] 0 containers: []
	W1101 10:19:52.195568  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:52.195586  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:52.195598  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:52.236807  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:52.236871  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:52.269035  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:52.269075  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:52.357207  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:52.357253  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:52.382568  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:52.382630  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:52.445059  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:52.445081  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:52.445100  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:52.496306  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:52.496351  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:52.525982  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:52.526012  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:52.583145  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:52.583185  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1101 10:19:55.296501  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:57.796181  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:58.796405  751704 pod_ready.go:94] pod "coredns-66bc5c9577-rh4z7" is "Ready"
	I1101 10:19:58.796436  751704 pod_ready.go:86] duration metric: took 34.006472134s for pod "coredns-66bc5c9577-rh4z7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.799179  751704 pod_ready.go:83] waiting for pod "etcd-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.803734  751704 pod_ready.go:94] pod "etcd-no-preload-680879" is "Ready"
	I1101 10:19:58.803766  751704 pod_ready.go:86] duration metric: took 4.559043ms for pod "etcd-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.806246  751704 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.810722  751704 pod_ready.go:94] pod "kube-apiserver-no-preload-680879" is "Ready"
	I1101 10:19:58.810755  751704 pod_ready.go:86] duration metric: took 4.482193ms for pod "kube-apiserver-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.813105  751704 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:55.118905  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:55.119416  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:55.119479  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:55.119530  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:55.150015  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:55.150047  734517 cri.go:89] found id: ""
	I1101 10:19:55.150056  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:19:55.150106  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:55.155248  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:55.155325  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:55.186955  734517 cri.go:89] found id: ""
	I1101 10:19:55.186989  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.187003  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:55.187012  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:55.187080  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:55.219523  734517 cri.go:89] found id: ""
	I1101 10:19:55.219548  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.219557  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:55.219564  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:55.219615  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:55.250437  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:55.250461  734517 cri.go:89] found id: ""
	I1101 10:19:55.250471  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:55.250535  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:55.255162  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:55.255234  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:55.286379  734517 cri.go:89] found id: ""
	I1101 10:19:55.286416  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.286427  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:55.286435  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:55.286512  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:55.319680  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:55.319707  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:55.319712  734517 cri.go:89] found id: ""
	I1101 10:19:55.319723  734517 logs.go:282] 2 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:55.319793  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:55.324355  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:55.328464  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:55.328548  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:55.359344  734517 cri.go:89] found id: ""
	I1101 10:19:55.359379  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.359391  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:55.359399  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:55.359454  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:55.389253  734517 cri.go:89] found id: ""
	I1101 10:19:55.389285  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.389294  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:55.389314  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:55.389331  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:55.408604  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:55.408658  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:55.458100  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:55.458145  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:55.488110  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:55.488149  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:55.544178  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:55.544232  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:55.603764  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:55.603791  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:55.603810  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:55.638460  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:55.638498  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:55.667868  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:55.667897  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:55.700741  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:55.700772  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:58.281556  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:58.282113  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:58.282172  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:58.282237  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:58.313748  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:58.313774  734517 cri.go:89] found id: ""
	I1101 10:19:58.313783  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:19:58.313848  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:58.318094  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:58.318154  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:58.347645  734517 cri.go:89] found id: ""
	I1101 10:19:58.347670  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.347678  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:58.347693  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:58.347744  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:58.377365  734517 cri.go:89] found id: ""
	I1101 10:19:58.377394  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.377408  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:58.377415  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:58.377501  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:58.406919  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:58.406943  734517 cri.go:89] found id: ""
	I1101 10:19:58.406953  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:58.407013  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:58.411320  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:58.411395  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:58.441180  734517 cri.go:89] found id: ""
	I1101 10:19:58.441210  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.441221  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:58.441229  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:58.441289  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:58.471079  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:58.471107  734517 cri.go:89] found id: ""
	I1101 10:19:58.471124  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:19:58.471190  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:58.476014  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:58.476116  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:58.506198  734517 cri.go:89] found id: ""
	I1101 10:19:58.506243  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.506255  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:58.506263  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:58.506324  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:58.539304  734517 cri.go:89] found id: ""
	I1101 10:19:58.539334  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.539344  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:58.539359  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:58.539377  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:58.575009  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:58.575046  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:58.625036  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:58.625081  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:58.654912  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:58.654948  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:58.707728  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:58.707771  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:58.741875  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:58.741908  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:58.834649  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:58.834707  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:58.855809  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:58.855889  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:58.919467  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:58.994757  751704 pod_ready.go:94] pod "kube-controller-manager-no-preload-680879" is "Ready"
	I1101 10:19:58.994786  751704 pod_ready.go:86] duration metric: took 181.653237ms for pod "kube-controller-manager-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:59.195422  751704 pod_ready.go:83] waiting for pod "kube-proxy-ft2vw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:59.594857  751704 pod_ready.go:94] pod "kube-proxy-ft2vw" is "Ready"
	I1101 10:19:59.594891  751704 pod_ready.go:86] duration metric: took 399.432038ms for pod "kube-proxy-ft2vw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:59.794059  751704 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:20:00.194949  751704 pod_ready.go:94] pod "kube-scheduler-no-preload-680879" is "Ready"
	I1101 10:20:00.194993  751704 pod_ready.go:86] duration metric: took 400.90442ms for pod "kube-scheduler-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:20:00.195011  751704 pod_ready.go:40] duration metric: took 35.410529293s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:20:00.247126  751704 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:20:00.249139  751704 out.go:179] * Done! kubectl is now configured to use "no-preload-680879" cluster and "default" namespace by default
	I1101 10:20:01.420696  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:01.421437  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:20:01.421513  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:01.421585  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:01.452654  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:01.452686  734517 cri.go:89] found id: ""
	I1101 10:20:01.452697  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:20:01.452773  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:01.457474  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:01.457582  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:01.488979  734517 cri.go:89] found id: ""
	I1101 10:20:01.489008  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.489019  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:01.489028  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:01.489094  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:01.519726  734517 cri.go:89] found id: ""
	I1101 10:20:01.519753  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.519761  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:01.519768  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:01.519817  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:01.550139  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:01.550163  734517 cri.go:89] found id: ""
	I1101 10:20:01.550172  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:01.550281  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:01.554678  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:01.554749  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:01.585678  734517 cri.go:89] found id: ""
	I1101 10:20:01.585713  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.585726  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:01.585736  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:01.585805  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:01.616144  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:01.616177  734517 cri.go:89] found id: ""
	I1101 10:20:01.616190  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:01.616264  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:01.620521  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:01.620597  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:01.650942  734517 cri.go:89] found id: ""
	I1101 10:20:01.650969  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.650978  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:01.650984  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:01.651038  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:01.683160  734517 cri.go:89] found id: ""
	I1101 10:20:01.683193  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.683206  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:01.683222  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:20:01.683242  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:01.718993  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:01.719036  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:01.767980  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:20:01.768024  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:01.799251  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:20:01.799285  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:20:01.858737  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:20:01.858783  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:20:01.893940  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:20:01.893970  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:20:01.980857  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:20:01.980905  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:20:02.002755  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:20:02.002794  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:20:02.064896  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
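
Throughout this stretch the 734517 run is stuck in the same loop: the apiserver healthz probe at https://192.168.103.2:8443/healthz is refused, so the subsequent kubectl describe nodes against localhost:8443 fails as well. A hedged sketch of probing the same things by hand from the node, with the host, port, container name, and container ID copied from the log (the curl/crictl invocations themselves are illustrative, not part of the captured output):

        curl -k https://192.168.103.2:8443/healthz        # same health endpoint the harness polls
        sudo crictl ps -a --quiet --name=kube-apiserver   # is an apiserver container present at all?
        sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5
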
	I1101 10:20:04.566524  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:04.567108  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:20:04.567192  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:04.567262  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:04.599913  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:04.599938  734517 cri.go:89] found id: ""
	I1101 10:20:04.599948  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:20:04.599999  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:04.604290  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:04.604357  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:04.638516  734517 cri.go:89] found id: ""
	I1101 10:20:04.638551  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.638562  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:04.638570  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:04.638637  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:04.668368  734517 cri.go:89] found id: ""
	I1101 10:20:04.668399  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.668407  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:04.668417  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:04.668476  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:04.699489  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:04.699512  734517 cri.go:89] found id: ""
	I1101 10:20:04.699521  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:04.699573  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:04.703986  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:04.704058  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:04.734280  734517 cri.go:89] found id: ""
	I1101 10:20:04.734328  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.734344  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:04.734354  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:04.734424  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:04.763968  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:04.763993  734517 cri.go:89] found id: ""
	I1101 10:20:04.764002  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:04.764055  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:04.768504  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:04.768584  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:04.798325  734517 cri.go:89] found id: ""
	I1101 10:20:04.798360  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.798371  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:04.798380  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:04.798452  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:04.830627  734517 cri.go:89] found id: ""
	I1101 10:20:04.830661  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.830672  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:04.830684  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:04.830697  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
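
The repeated blocks above are minikube's diagnostic sweep: for each control-plane component it lists matching containers with crictl, tails the last 400 lines of each, then collects kubelet, CRI-O, and dmesg output. A rough shell equivalent assembled from the Run: lines in the log — the loop wrapper is illustrative; only the individual commands appear in the captured output:

        for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
          for id in $(sudo crictl ps -a --quiet --name="$name"); do
            sudo /usr/local/bin/crictl logs --tail 400 "$id"   # per-container tail, as in the log
          done
        done
        sudo journalctl -u kubelet -n 400
        sudo journalctl -u crio -n 400
        sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
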
	
	
	==> CRI-O <==
	Nov 01 10:19:36 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:36.825852582Z" level=info msg="Created container 60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks/kubernetes-dashboard" id=0d4c0cf4-4471-4e44-8de2-969d5f185774 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:36 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:36.826569841Z" level=info msg="Starting container: 60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7" id=3b2ab99a-5347-4440-ac6d-f78e7b2be0cf name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:36 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:36.828402881Z" level=info msg="Started container" PID=1712 containerID=60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks/kubernetes-dashboard id=3b2ab99a-5347-4440-ac6d-f78e7b2be0cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d0892a7e37ec40dccf925d91bf95c6a7631952ff9b460a7ab5c7a1364243258
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.358019938Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a97692c7-db44-45e1-8861-5b8d27039432 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.359013674Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e5668469-4491-4066-ab72-ed3d7566d8cd name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.360168835Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7a50cd59-38b4-442d-a42b-34fe40f89274 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.360339725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.364746988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.365000253Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7efd9bce602a8f703413d4bc6ac93cf2f49ccf5576287846ab43932b910c6d14/merged/etc/passwd: no such file or directory"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.365040164Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7efd9bce602a8f703413d4bc6ac93cf2f49ccf5576287846ab43932b910c6d14/merged/etc/group: no such file or directory"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.365338721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.399346433Z" level=info msg="Created container eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4: kube-system/storage-provisioner/storage-provisioner" id=7a50cd59-38b4-442d-a42b-34fe40f89274 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.400021439Z" level=info msg="Starting container: eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4" id=a4df1c56-c6de-40c6-b7ec-161939e0fdb8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.402019572Z" level=info msg="Started container" PID=1734 containerID=eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4 description=kube-system/storage-provisioner/storage-provisioner id=a4df1c56-c6de-40c6-b7ec-161939e0fdb8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86ffaf279f28493105abc4d6cdef7ee4b4916318cfdc6726c7019884bd8fb66b
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.213065702Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=65673f1b-4716-4a22-9041-548fb5c30e6d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.2141801Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c8314990-d257-45b9-904b-033522077626 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.215295668Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs/dashboard-metrics-scraper" id=0d0608a7-a1c9-493d-bc75-4fdd1ebe556f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.215465586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.221515306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.222222334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.262952532Z" level=info msg="Created container 1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs/dashboard-metrics-scraper" id=0d0608a7-a1c9-493d-bc75-4fdd1ebe556f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.2637673Z" level=info msg="Starting container: 1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc" id=6f4428a6-14f4-45ff-ab12-4efa9ea82e30 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.266227172Z" level=info msg="Started container" PID=1771 containerID=1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs/dashboard-metrics-scraper id=6f4428a6-14f4-45ff-ab12-4efa9ea82e30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02b250581c808b724e1fe1c8794c41e7769fc7df53a3228427283892725055e1
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.371431563Z" level=info msg="Removing container: 9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12" id=85b60ce8-6a33-48bf-b5ac-57df4626d63b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.383253787Z" level=info msg="Removed container 9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs/dashboard-metrics-scraper" id=85b60ce8-6a33-48bf-b5ac-57df4626d63b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1cca6171f6e63       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   02b250581c808       dashboard-metrics-scraper-5f989dc9cf-xdjzs       kubernetes-dashboard
	eb353e58c0fc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   86ffaf279f284       storage-provisioner                              kube-system
	60c3ea523dc72       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   30 seconds ago      Running             kubernetes-dashboard        0                   1d0892a7e37ec       kubernetes-dashboard-8694d4445c-wrwks            kubernetes-dashboard
	17a38fc632529       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           48 seconds ago      Running             coredns                     0                   a31721273572e       coredns-5dd5756b68-cprx9                         kube-system
	86f363e26ca1a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   771767b62109a       busybox                                          default
	afb66b64e1b12       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           48 seconds ago      Running             kube-proxy                  0                   ca314b9d29594       kube-proxy-s9fsm                                 kube-system
	8fd6240f85ba7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   86ffaf279f284       storage-provisioner                              kube-system
	39fe07ee60bf7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   0c8ad63b226c6       kindnet-cmzcq                                    kube-system
	f7ba02ac93628       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   05b82e5667c49       etcd-old-k8s-version-556573                      kube-system
	898589e23f303       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   f5b0e6f9cfaf9       kube-apiserver-old-k8s-version-556573            kube-system
	def0c7222196b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   37719e333ec60       kube-scheduler-old-k8s-version-556573            kube-system
	34df676c07e5e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   dbe5dbd771c26       kube-controller-manager-old-k8s-version-556573   kube-system
	
	
	==> coredns [17a38fc632529ff81911abfb211dcd7b07d60fd60c225ccae529e36e62d8b497] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38979 - 57152 "HINFO IN 2696036869424178194.8019122094304270670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.040607808s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-556573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-556573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=old-k8s-version-556573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_18_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-556573
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:19:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:19:47 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:19:47 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:19:47 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:19:47 +0000   Sat, 01 Nov 2025 10:18:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-556573
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                684343d3-91b0-49c0-8416-d6f599882a42
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-cprx9                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-556573                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-cmzcq                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-556573             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-556573    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-s9fsm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-556573             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-xdjzs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-wrwks             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-556573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-556573 event: Registered Node old-k8s-version-556573 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-556573 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node old-k8s-version-556573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                  node-controller  Node old-k8s-version-556573 event: Registered Node old-k8s-version-556573 in Controller
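
The Conditions, Allocated resources, and Events above are the standard kubectl describe node view of old-k8s-version-556573 as captured by the log collector. A sketch of reproducing it from the host, assuming the usual minikube-created kubeconfig context named after the profile (the context name is an assumption, not shown in the log):

        kubectl --context old-k8s-version-556573 describe node old-k8s-version-556573
        kubectl --context old-k8s-version-556573 get events -A --sort-by=.lastTimestamp
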
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [f7ba02ac9362802eef20c5f8870a35d429e636eb86c22620f260caf726977133] <==
	{"level":"info","ts":"2025-11-01T10:19:14.834417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-01T10:19:14.834597Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:19:14.834681Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:19:14.83495Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:19:14.835202Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:19:14.83527Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:19:14.841345Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:19:14.841638Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:19:14.841697Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:19:14.841774Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-01T10:19:14.841801Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-01T10:19:16.209344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:19:16.209389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:19:16.209404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-01T10:19:16.209416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:19:16.209421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-01T10:19:16.209429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:19:16.209437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-01T10:19:16.210476Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-556573 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:19:16.210486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:19:16.210504Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:19:16.210753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:19:16.210782Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:19:16.211774Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:19:16.211777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 10:20:07 up  3:02,  0 user,  load average: 2.45, 3.26, 2.69
	Linux old-k8s-version-556573 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39fe07ee60bf7ed7e063e6b8673b642d58d70c7d696018d876b8bdb6e0d86d70] <==
	I1101 10:19:18.805266       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:19:18.805523       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:19:18.805669       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:19:18.805689       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:19:18.805702       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:19:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:19:19.008795       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:19:19.008815       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:19:19.008824       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:19:19.008974       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:19:19.450982       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:19:19.451025       1 metrics.go:72] Registering metrics
	I1101 10:19:19.451235       1 controller.go:711] "Syncing nftables rules"
	I1101 10:19:29.008964       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:19:29.009014       1 main.go:301] handling current node
	I1101 10:19:39.009231       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:19:39.009271       1 main.go:301] handling current node
	I1101 10:19:49.009354       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:19:49.009392       1 main.go:301] handling current node
	I1101 10:19:59.009459       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:19:59.009508       1 main.go:301] handling current node
	
	
	==> kube-apiserver [898589e23f303c22d96fcb1dea82d386d8e8ed945f8c83a07c7f63c935471dbd] <==
	I1101 10:19:17.199606       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1101 10:19:17.256428       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:19:17.299946       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 10:19:17.300008       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 10:19:17.300068       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:19:17.300081       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 10:19:17.300119       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:19:17.300206       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:19:17.300219       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:19:17.300226       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:19:17.300233       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:19:17.300520       1 shared_informer.go:318] Caches are synced for configmaps
	E1101 10:19:17.306165       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:19:17.338021       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:19:18.144728       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:19:18.184545       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:19:18.207015       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:19:18.212350       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:19:18.221671       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:19:18.231761       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:19:18.286454       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.66.66"}
	I1101 10:19:18.301514       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.27.153"}
	I1101 10:19:30.163859       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:19:30.190828       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 10:19:30.194594       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34df676c07e5e1c97b53a43963c2ebbd436e0bd1bf7587e9f70aea3ccac71699] <==
	I1101 10:19:30.202165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.134µs"
	I1101 10:19:30.204310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.954631ms"
	I1101 10:19:30.206787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.097703ms"
	I1101 10:19:30.217112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.734506ms"
	I1101 10:19:30.217238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.661µs"
	I1101 10:19:30.217286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.3µs"
	I1101 10:19:30.218954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.871µs"
	I1101 10:19:30.220173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.313729ms"
	I1101 10:19:30.220306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="82.786µs"
	I1101 10:19:30.227908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="116.615µs"
	I1101 10:19:30.266111       1 shared_informer.go:318] Caches are synced for disruption
	I1101 10:19:30.300187       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:19:30.386448       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:19:30.705171       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:19:30.763422       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:19:30.763459       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:19:33.322367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.183µs"
	I1101 10:19:34.327291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.03µs"
	I1101 10:19:35.330174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.055µs"
	I1101 10:19:37.342658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.345868ms"
	I1101 10:19:37.342751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.315µs"
	I1101 10:19:50.333398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.973899ms"
	I1101 10:19:50.333539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.952µs"
	I1101 10:19:52.383048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.316µs"
	I1101 10:20:00.520806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="183.118µs"
	
	
	==> kube-proxy [afb66b64e1b12d5df0e760a5855c578f0d4a4b6656cb02a4aee48ff926e6c3ed] <==
	I1101 10:19:18.621708       1 server_others.go:69] "Using iptables proxy"
	I1101 10:19:18.631742       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1101 10:19:18.650169       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:19:18.653195       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:19:18.653248       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:19:18.653258       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:19:18.653291       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:19:18.653572       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:19:18.653641       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:19:18.654378       1 config.go:188] "Starting service config controller"
	I1101 10:19:18.654444       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:19:18.655277       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:19:18.655446       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:19:18.655535       1 config.go:315] "Starting node config controller"
	I1101 10:19:18.655594       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:19:18.755762       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 10:19:18.755813       1 shared_informer.go:318] Caches are synced for node config
	I1101 10:19:18.755805       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [def0c7222196bef86484e9e3c0a80fd1e6c0281c8d8ab1bbf3ec0fb56299940b] <==
	I1101 10:19:15.398438       1 serving.go:348] Generated self-signed cert in-memory
	I1101 10:19:17.268557       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 10:19:17.268583       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:19:17.272532       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 10:19:17.272556       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:19:17.272569       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 10:19:17.272581       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:19:17.272588       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:19:17.272607       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 10:19:17.274620       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 10:19:17.274681       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 10:19:17.373518       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 10:19:17.373591       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1101 10:19:17.373523       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.207931     724 topology_manager.go:215] "Topology Admit Handler" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-xdjzs"
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.321008     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a38386b4-80d8-4037-8ca8-f9885dd37c2d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-xdjzs\" (UID: \"a38386b4-80d8-4037-8ca8-f9885dd37c2d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs"
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.321077     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8b4\" (UniqueName: \"kubernetes.io/projected/a38386b4-80d8-4037-8ca8-f9885dd37c2d-kube-api-access-rp8b4\") pod \"dashboard-metrics-scraper-5f989dc9cf-xdjzs\" (UID: \"a38386b4-80d8-4037-8ca8-f9885dd37c2d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs"
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.321189     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9cjn\" (UniqueName: \"kubernetes.io/projected/5b1c4fe0-25e6-40ca-989f-123a98c5db4c-kube-api-access-d9cjn\") pod \"kubernetes-dashboard-8694d4445c-wrwks\" (UID: \"5b1c4fe0-25e6-40ca-989f-123a98c5db4c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks"
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.321242     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5b1c4fe0-25e6-40ca-989f-123a98c5db4c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-wrwks\" (UID: \"5b1c4fe0-25e6-40ca-989f-123a98c5db4c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks"
	Nov 01 10:19:33 old-k8s-version-556573 kubelet[724]: I1101 10:19:33.309210     724 scope.go:117] "RemoveContainer" containerID="bde06aead925fe64085c895a5eb0c5c67f24a46a77928cbf06e2e46734e7ef37"
	Nov 01 10:19:34 old-k8s-version-556573 kubelet[724]: I1101 10:19:34.313899     724 scope.go:117] "RemoveContainer" containerID="bde06aead925fe64085c895a5eb0c5c67f24a46a77928cbf06e2e46734e7ef37"
	Nov 01 10:19:34 old-k8s-version-556573 kubelet[724]: I1101 10:19:34.314097     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:34 old-k8s-version-556573 kubelet[724]: E1101 10:19:34.314475     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:19:35 old-k8s-version-556573 kubelet[724]: I1101 10:19:35.318301     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:35 old-k8s-version-556573 kubelet[724]: E1101 10:19:35.318729     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:19:37 old-k8s-version-556573 kubelet[724]: I1101 10:19:37.337679     724 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks" podStartSLOduration=1.0881186999999999 podCreationTimestamp="2025-11-01 10:19:30 +0000 UTC" firstStartedPulling="2025-11-01 10:19:30.533894395 +0000 UTC m=+16.434392453" lastFinishedPulling="2025-11-01 10:19:36.783362149 +0000 UTC m=+22.683860207" observedRunningTime="2025-11-01 10:19:37.337032118 +0000 UTC m=+23.237530185" watchObservedRunningTime="2025-11-01 10:19:37.337586454 +0000 UTC m=+23.238084519"
	Nov 01 10:19:40 old-k8s-version-556573 kubelet[724]: I1101 10:19:40.510756     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:40 old-k8s-version-556573 kubelet[724]: E1101 10:19:40.511091     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:19:49 old-k8s-version-556573 kubelet[724]: I1101 10:19:49.357475     724 scope.go:117] "RemoveContainer" containerID="8fd6240f85ba7e33bc3cd42db7e4ecfbef506ccc7d5709f3945a260b4406ba64"
	Nov 01 10:19:52 old-k8s-version-556573 kubelet[724]: I1101 10:19:52.212312     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:52 old-k8s-version-556573 kubelet[724]: I1101 10:19:52.369450     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:52 old-k8s-version-556573 kubelet[724]: I1101 10:19:52.370091     724 scope.go:117] "RemoveContainer" containerID="1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc"
	Nov 01 10:19:52 old-k8s-version-556573 kubelet[724]: E1101 10:19:52.370595     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:20:00 old-k8s-version-556573 kubelet[724]: I1101 10:20:00.510233     724 scope.go:117] "RemoveContainer" containerID="1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc"
	Nov 01 10:20:00 old-k8s-version-556573 kubelet[724]: E1101 10:20:00.510712     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:20:03 old-k8s-version-556573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:20:03 old-k8s-version-556573 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:20:03 old-k8s-version-556573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:20:03 old-k8s-version-556573 systemd[1]: kubelet.service: Consumed 1.575s CPU time.
	
	
	==> kubernetes-dashboard [60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7] <==
	2025/11/01 10:19:36 Using namespace: kubernetes-dashboard
	2025/11/01 10:19:36 Using in-cluster config to connect to apiserver
	2025/11/01 10:19:36 Using secret token for csrf signing
	2025/11/01 10:19:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:19:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:19:36 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 10:19:36 Generating JWE encryption key
	2025/11/01 10:19:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:19:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:19:36 Initializing JWE encryption key from synchronized object
	2025/11/01 10:19:36 Creating in-cluster Sidecar client
	2025/11/01 10:19:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:19:36 Serving insecurely on HTTP port: 9090
	2025/11/01 10:20:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:19:36 Starting overwatch
	
	
	==> storage-provisioner [8fd6240f85ba7e33bc3cd42db7e4ecfbef506ccc7d5709f3945a260b4406ba64] <==
	I1101 10:19:18.587712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:19:48.590065       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4] <==
	I1101 10:19:49.414095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:19:49.421742       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:19:49.421783       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:20:06.817774       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:20:06.817919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa58e27b-5340-4f47-971d-25a668ca76a2", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-556573_44a8a45b-0546-46bb-bacd-dc3136e956e8 became leader
	I1101 10:20:06.818016       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-556573_44a8a45b-0546-46bb-bacd-dc3136e956e8!
	I1101 10:20:06.918262       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-556573_44a8a45b-0546-46bb-bacd-dc3136e956e8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556573 -n old-k8s-version-556573
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556573 -n old-k8s-version-556573: exit status 2 (365.624294ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-556573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-556573
helpers_test.go:243: (dbg) docker inspect old-k8s-version-556573:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e",
	        "Created": "2025-11-01T10:17:54.292571852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 750211,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:19:07.739790612Z",
	            "FinishedAt": "2025-11-01T10:19:06.818303299Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/hosts",
	        "LogPath": "/var/lib/docker/containers/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e/fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e-json.log",
	        "Name": "/old-k8s-version-556573",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-556573:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-556573",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fa365e4464f7b62272e877e5e1a88a86fef044c6d6fcb3418080aff2a718bc1e",
	                "LowerDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4facf36bf2fbf14ccb684b9dadf34edcc1aafb1047e6fddc098a6134e0e1cc98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-556573",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-556573/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-556573",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-556573",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-556573",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfe511a51a60770a4c992ec00dc1dff029279ab332cf23f8c0d746dfc58b1eb2",
	            "SandboxKey": "/var/run/docker/netns/cfe511a51a60",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-556573": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:ca:f3:37:6b:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bbcdd55cf2cbe101dd2954fd5b3da9010f13fa5cf479e04754b13ce474d6499d",
	                    "EndpointID": "f5be21f45395ab78586dd177e73d3bc3a43db69f10edffc88406a8ab2be4529c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-556573",
	                        "fa365e4464f7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556573 -n old-k8s-version-556573
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556573 -n old-k8s-version-556573: exit status 2 (374.343643ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-556573 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-556573 logs -n 25: (1.203551083s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p cert-options-278823                                                                                                                                                                                                                        │ cert-options-278823       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p force-systemd-flag-767379 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ stop    │ -p kubernetes-upgrade-949166                                                                                                                                                                                                                  │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ stop    │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ ssh     │ force-systemd-flag-767379 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p force-systemd-flag-767379                                                                                                                                                                                                                  │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-556573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p no-preload-680879 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-556573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ old-k8s-version-556573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:19:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:19:13.906369  751704 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:19:13.906696  751704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:19:13.906713  751704 out.go:374] Setting ErrFile to fd 2...
	I1101 10:19:13.906720  751704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:19:13.907015  751704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:19:13.907484  751704 out.go:368] Setting JSON to false
	I1101 10:19:13.908829  751704 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10891,"bootTime":1761981463,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:19:13.908989  751704 start.go:143] virtualization: kvm guest
	I1101 10:19:13.910871  751704 out.go:179] * [no-preload-680879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:19:13.912111  751704 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:19:13.912137  751704 notify.go:221] Checking for updates...
	I1101 10:19:13.914183  751704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:19:13.915953  751704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:13.917094  751704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:19:13.918344  751704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:19:13.919394  751704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:19:13.921049  751704 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:19:13.921752  751704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:19:13.949759  751704 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:19:13.949923  751704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:19:14.026278  751704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:19:14.014732237 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:19:14.026395  751704 docker.go:319] overlay module found
	I1101 10:19:14.028147  751704 out.go:179] * Using the docker driver based on existing profile
	I1101 10:19:14.029450  751704 start.go:309] selected driver: docker
	I1101 10:19:14.029471  751704 start.go:930] validating driver "docker" against &{Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:19:14.029573  751704 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:19:14.030242  751704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:19:14.099496  751704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:19:14.087804922 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:19:14.099911  751704 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:19:14.099949  751704 cni.go:84] Creating CNI manager for ""
	I1101 10:19:14.100023  751704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:19:14.100075  751704 start.go:353] cluster config:
	{Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:19:14.102954  751704 out.go:179] * Starting "no-preload-680879" primary control-plane node in "no-preload-680879" cluster
	I1101 10:19:14.104054  751704 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:19:14.105351  751704 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:19:14.106399  751704 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:19:14.106532  751704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:19:14.106600  751704 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json ...
	I1101 10:19:14.106728  751704 cache.go:107] acquiring lock: {Name:mke74377eb8e8f0a2186d46bf4bdde02a944c052 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106786  751704 cache.go:107] acquiring lock: {Name:mke846f8ed0eae3f666a2c55755500ad865ceb9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106802  751704 cache.go:107] acquiring lock: {Name:mk54c640473c09dfff1239ead2dd2d51481a015a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106868  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 10:19:14.106881  751704 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 172.118µs
	I1101 10:19:14.106892  751704 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 10:19:14.106892  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 10:19:14.106823  751704 cache.go:107] acquiring lock: {Name:mk1c05d679d90243f04dc9223673738f53287a15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106918  751704 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 123.988µs
	I1101 10:19:14.106917  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 10:19:14.106928  751704 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 10:19:14.106921  751704 cache.go:107] acquiring lock: {Name:mke53a0d558f57413c985e8c7d551691237ca10b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106924  751704 cache.go:107] acquiring lock: {Name:mkf19fdae2c3486652a390b24771bb4742a08787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106934  751704 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 169.637µs
	I1101 10:19:14.106958  751704 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 10:19:14.106747  751704 cache.go:107] acquiring lock: {Name:mka96111944f8dc8ebfdcd94de79dafd069ca1d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.106975  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 10:19:14.106980  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 10:19:14.106987  751704 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 79.806µs
	I1101 10:19:14.106988  751704 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 69.795µs
	I1101 10:19:14.106996  751704 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 10:19:14.107002  751704 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 10:19:14.106956  751704 cache.go:107] acquiring lock: {Name:mkcd303cc659630879e706aba8fe46f709be28e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.107028  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 10:19:14.107028  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1101 10:19:14.107040  751704 cache.go:115] /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 10:19:14.107038  751704 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 317.209µs
	I1101 10:19:14.107049  751704 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 10:19:14.107048  751704 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 102.264µs
	I1101 10:19:14.107042  751704 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 269.507µs
	I1101 10:19:14.107056  751704 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 10:19:14.107058  751704 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 10:19:14.107067  751704 cache.go:87] Successfully saved all images to host disk.
	I1101 10:19:14.132517  751704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:19:14.132546  751704 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:19:14.132570  751704 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:19:14.132608  751704 start.go:360] acquireMachinesLock for no-preload-680879: {Name:mkb2bd3a5c4fc957e021ade32b7982a68330a2bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:19:14.132679  751704 start.go:364] duration metric: took 48.539µs to acquireMachinesLock for "no-preload-680879"
	I1101 10:19:14.132703  751704 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:19:14.132711  751704 fix.go:54] fixHost starting: 
	I1101 10:19:14.133012  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:14.156778  751704 fix.go:112] recreateIfNeeded on no-preload-680879: state=Stopped err=<nil>
	W1101 10:19:14.156819  751704 fix.go:138] unexpected machine state, will restart: <nil>
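The restart path above starts from a stopped machine: the docker container inspect call reported state=Stopped, so minikube reuses the existing container rather than provisioning a new one. A minimal sketch of reproducing that status check by hand, assuming the profile name from the log and that it is run on the same host (not part of the captured run):

    docker container inspect no-preload-680879 --format '{{.State.Status}}'   # prints "exited", "running", etc.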
	I1101 10:19:09.855370  734517 cri.go:89] found id: ""
	I1101 10:19:09.855400  734517 logs.go:282] 0 containers: []
	W1101 10:19:09.855411  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:09.855418  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:09.855471  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:09.885995  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:09.886022  734517 cri.go:89] found id: "5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:19:09.886026  734517 cri.go:89] found id: ""
	I1101 10:19:09.886036  734517 logs.go:282] 2 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed]
	I1101 10:19:09.886097  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:09.890892  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:09.895212  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:09.895276  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:09.925925  734517 cri.go:89] found id: ""
	I1101 10:19:09.925964  734517 logs.go:282] 0 containers: []
	W1101 10:19:09.925974  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:09.925983  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:09.926064  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:09.957057  734517 cri.go:89] found id: ""
	I1101 10:19:09.957091  734517 logs.go:282] 0 containers: []
	W1101 10:19:09.957102  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:09.957119  734517 logs.go:123] Gathering logs for kube-controller-manager [5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed] ...
	I1101 10:19:09.957132  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:19:09.987088  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:09.987120  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:10.029318  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:10.029372  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:10.068546  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:10.068593  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:10.140318  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:10.140368  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:10.206671  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:10.206699  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:10.206719  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:10.254465  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:10.254506  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:10.274210  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:10.274254  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:10.310826  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:10.310887  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:12.841952  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:12.842503  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
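The healthz probe above is refused because the API server on this node is not serving yet. A rough way to reproduce the same probe by hand, assuming the node IP and port shown in the log (192.168.103.2:8443) are reachable from where you run it; this is a sketch, not part of the captured run:

    curl -k https://192.168.103.2:8443/healthz   # -k: the apiserver cert is signed by minikubeCA, not a system CA
    # "connection refused" here corresponds to the api_server.go "stopped:" line above.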
	I1101 10:19:12.842563  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:12.842610  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:12.876012  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:12.876047  734517 cri.go:89] found id: ""
	I1101 10:19:12.876060  734517 logs.go:282] 1 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:12.876121  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:12.880716  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:12.880798  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:12.911534  734517 cri.go:89] found id: ""
	I1101 10:19:12.911561  734517 logs.go:282] 0 containers: []
	W1101 10:19:12.911569  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:12.911575  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:12.911635  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:12.949287  734517 cri.go:89] found id: ""
	I1101 10:19:12.949314  734517 logs.go:282] 0 containers: []
	W1101 10:19:12.949323  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:12.949329  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:12.949387  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:12.978640  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:12.978670  734517 cri.go:89] found id: ""
	I1101 10:19:12.978683  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:12.978760  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:12.983393  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:12.983462  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:13.015887  734517 cri.go:89] found id: ""
	I1101 10:19:13.015917  734517 logs.go:282] 0 containers: []
	W1101 10:19:13.015928  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:13.015937  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:13.016057  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:13.054914  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:13.055006  734517 cri.go:89] found id: "5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
	I1101 10:19:13.055015  734517 cri.go:89] found id: ""
	I1101 10:19:13.055026  734517 logs.go:282] 2 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed]
	I1101 10:19:13.055100  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:13.059806  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:13.064258  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:13.064335  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:13.094414  734517 cri.go:89] found id: ""
	I1101 10:19:13.094443  734517 logs.go:282] 0 containers: []
	W1101 10:19:13.094454  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:13.094462  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:13.094536  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:13.126617  734517 cri.go:89] found id: ""
	I1101 10:19:13.126659  734517 logs.go:282] 0 containers: []
	W1101 10:19:13.126677  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:13.126708  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:13.126724  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:13.181917  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:13.181967  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:13.222519  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:13.222550  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:13.298526  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:13.298568  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:13.319609  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:13.319661  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:13.390332  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:13.390362  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:13.390382  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:13.432147  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:13.432197  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:13.484294  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:13.484343  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:13.518497  734517 logs.go:123] Gathering logs for kube-controller-manager [5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed] ...
	I1101 10:19:13.518526  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a37316e6802f2d195ead2c6f574606260c5bd5e54ca842230228194751950ed"
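Each "Gathering logs for ..." step above is an ordinary command executed on the node over SSH. A hedged sketch of running the same collection by hand via minikube ssh; the profile name is a placeholder and the ssh wrapper is an assumption, but the inner commands are the ones shown in the log:

    P=<profile>   # placeholder: the profile whose node you want to inspect
    out/minikube-linux-amd64 -p "$P" ssh "sudo journalctl -u kubelet -n 400"
    out/minikube-linux-amd64 -p "$P" ssh "sudo journalctl -u crio -n 400"
    out/minikube-linux-amd64 -p "$P" ssh "sudo crictl ps -a"
    out/minikube-linux-amd64 -p "$P" ssh "sudo /usr/local/bin/crictl logs --tail 400 <container-id>"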
	I1101 10:19:13.706315  749992 cli_runner.go:164] Run: docker network inspect old-k8s-version-556573 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:19:13.726524  749992 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 10:19:13.731452  749992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
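The hosts-file update above is a single idempotent pipeline: strip any existing host.minikube.internal entry, append the current gateway IP, then copy the result back over /etc/hosts as root. The same command restated with comments (IP taken from the log; this runs on the node, not on the Jenkins host):

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts      # drop any stale entry
      echo "192.168.94.1	host.minikube.internal"          # re-add it, tab-separated, with the current gateway IP
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                            # replace the file via cp as root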
	I1101 10:19:13.743248  749992 kubeadm.go:884] updating cluster {Name:old-k8s-version-556573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:19:13.743417  749992 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:19:13.743467  749992 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:19:13.785358  749992 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:19:13.785386  749992 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:19:13.785443  749992 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:19:13.816610  749992 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:19:13.816636  749992 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:19:13.816645  749992 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1101 10:19:13.816786  749992 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-556573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:19:13.816910  749992 ssh_runner.go:195] Run: crio config
	I1101 10:19:13.872019  749992 cni.go:84] Creating CNI manager for ""
	I1101 10:19:13.872068  749992 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:19:13.872112  749992 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:19:13.872155  749992 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556573 NodeName:old-k8s-version-556573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:19:13.872724  749992 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-556573"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:19:13.872809  749992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:19:13.882622  749992 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:19:13.882694  749992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:19:13.892412  749992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:19:13.908682  749992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:19:13.924825  749992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1101 10:19:13.942231  749992 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:19:13.947571  749992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:19:13.960716  749992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:14.068595  749992 ssh_runner.go:195] Run: sudo systemctl start kubelet
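The block above stages the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the regenerated kubeadm.yaml.new, then reloads systemd and starts kubelet. A sketch of verifying the result on the node with standard systemd tooling; the commands are assumptions except for the diff, which minikube itself runs a moment later in this log:

    systemctl cat kubelet                         # unit file plus the 10-kubeadm.conf drop-in written above
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    journalctl -u kubelet -n 50 --no-pager        # recent kubelet output after the restart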
	I1101 10:19:14.096121  749992 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573 for IP: 192.168.94.2
	I1101 10:19:14.096152  749992 certs.go:195] generating shared ca certs ...
	I1101 10:19:14.096176  749992 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:14.096422  749992 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:19:14.096488  749992 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:19:14.096506  749992 certs.go:257] generating profile certs ...
	I1101 10:19:14.096639  749992 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.key
	I1101 10:19:14.096727  749992 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key.91d3229f
	I1101 10:19:14.096783  749992 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key
	I1101 10:19:14.096956  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:19:14.097006  749992 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:19:14.097022  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:19:14.097051  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:19:14.097086  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:19:14.097116  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:19:14.097166  749992 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:19:14.097933  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:19:14.122097  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:19:14.146186  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:19:14.171424  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:19:14.199388  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:19:14.227146  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:19:14.248660  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:19:14.272317  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:19:14.301998  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:19:14.333403  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:19:14.354467  749992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:19:14.375874  749992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:19:14.391454  749992 ssh_runner.go:195] Run: openssl version
	I1101 10:19:14.400020  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:19:14.410531  749992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:19:14.415311  749992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:19:14.415382  749992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:19:14.460172  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:19:14.472376  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:19:14.483536  749992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:14.488585  749992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:14.488680  749992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:14.533215  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:19:14.544014  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:19:14.554184  749992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:19:14.558978  749992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:19:14.559057  749992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:19:14.601539  749992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
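Each CA above is installed by hashing it with openssl and symlinking it under /etc/ssl/certs/<hash>.0, the filename scheme OpenSSL uses when looking certificates up in its trust directory. A sketch of the same pair of steps for one cert from this run (paths and the b5213941 hash come from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # link name is <hash>.0, as in the log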
	I1101 10:19:14.611265  749992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:19:14.616160  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:19:14.665063  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:19:14.723667  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:19:14.780955  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:19:14.842737  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:19:14.887691  749992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
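The run of openssl calls above asks whether each control-plane certificate is still valid 86400 seconds (24 hours) from now; -checkend exits 0 when the certificate will not expire within that window and non-zero otherwise. A sketch of one such check with the exit-status handling made explicit (certificate path taken from the log; the echo wrapper is illustrative):

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least 24h"
    else
      echo "certificate expires within 24h (or could not be read)"
    fi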
	I1101 10:19:14.929915  749992 kubeadm.go:401] StartCluster: {Name:old-k8s-version-556573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-556573 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:19:14.930067  749992 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:19:14.930158  749992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:19:14.969523  749992 cri.go:89] found id: "f7ba02ac9362802eef20c5f8870a35d429e636eb86c22620f260caf726977133"
	I1101 10:19:14.969557  749992 cri.go:89] found id: "898589e23f303c22d96fcb1dea82d386d8e8ed945f8c83a07c7f63c935471dbd"
	I1101 10:19:14.969562  749992 cri.go:89] found id: "def0c7222196bef86484e9e3c0a80fd1e6c0281c8d8ab1bbf3ec0fb56299940b"
	I1101 10:19:14.969568  749992 cri.go:89] found id: "34df676c07e5e1c97b53a43963c2ebbd436e0bd1bf7587e9f70aea3ccac71699"
	I1101 10:19:14.969572  749992 cri.go:89] found id: ""
	I1101 10:19:14.969624  749992 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:19:14.984310  749992 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:19:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:19:14.984386  749992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:19:14.995019  749992 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:19:14.995046  749992 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:19:14.995096  749992 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:19:15.005083  749992 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:19:15.005942  749992 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556573" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:15.006345  749992 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556573" cluster setting kubeconfig missing "old-k8s-version-556573" context setting]
	I1101 10:19:15.006965  749992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:15.008856  749992 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:19:15.018284  749992 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1101 10:19:15.018329  749992 kubeadm.go:602] duration metric: took 23.275022ms to restartPrimaryControlPlane
	I1101 10:19:15.018342  749992 kubeadm.go:403] duration metric: took 88.447176ms to StartCluster
	I1101 10:19:15.018362  749992 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:15.018444  749992 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:15.019454  749992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:15.019729  749992 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:19:15.019806  749992 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:19:15.019931  749992 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-556573"
	I1101 10:19:15.019968  749992 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-556573"
	W1101 10:19:15.019980  749992 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:19:15.020001  749992 config.go:182] Loaded profile config "old-k8s-version-556573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:19:15.020026  749992 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-556573"
	I1101 10:19:15.020012  749992 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:19:15.020057  749992 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-556573"
	I1101 10:19:15.020004  749992 addons.go:70] Setting dashboard=true in profile "old-k8s-version-556573"
	I1101 10:19:15.020114  749992 addons.go:239] Setting addon dashboard=true in "old-k8s-version-556573"
	W1101 10:19:15.020125  749992 addons.go:248] addon dashboard should already be in state true
	I1101 10:19:15.020159  749992 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:19:15.020401  749992 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:19:15.020578  749992 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:19:15.020658  749992 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:19:15.024738  749992 out.go:179] * Verifying Kubernetes components...
	I1101 10:19:15.026339  749992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:15.047381  749992 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-556573"
	W1101 10:19:15.047412  749992 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:19:15.047445  749992 host.go:66] Checking if "old-k8s-version-556573" exists ...
	I1101 10:19:15.047967  749992 cli_runner.go:164] Run: docker container inspect old-k8s-version-556573 --format={{.State.Status}}
	I1101 10:19:15.048128  749992 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:19:15.049318  749992 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:19:15.049364  749992 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:19:15.049382  749992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:19:15.049447  749992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:19:15.051540  749992 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:19:15.053825  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:19:15.053868  749992 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:19:15.053951  749992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:19:15.076026  749992 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:19:15.076054  749992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:19:15.076121  749992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556573
	I1101 10:19:15.081981  749992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:19:15.090592  749992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:19:15.107213  749992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/old-k8s-version-556573/id_rsa Username:docker}
	I1101 10:19:15.184207  749992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:19:15.201303  749992 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-556573" to be "Ready" ...
	I1101 10:19:15.211343  749992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:19:15.221660  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:19:15.221771  749992 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:19:15.235476  749992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:19:15.243708  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:19:15.243750  749992 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:19:15.263411  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:19:15.263447  749992 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:19:15.283814  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:19:15.283865  749992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:19:15.302435  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:19:15.302463  749992 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:19:15.319985  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:19:15.320026  749992 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:19:15.336028  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:19:15.336058  749992 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:19:15.352358  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:19:15.352400  749992 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:19:15.368234  749992 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:19:15.368266  749992 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:19:15.383330  749992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
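The addon step above stages each dashboard manifest under /etc/kubernetes/addons and then applies them all with a single kubectl invocation against the node-local kubeconfig. A minimal Go sketch of that pattern follows; the manifest list and kubeconfig path are taken from the log, but the helper itself is illustrative, not minikube's actual code.

// Sketch: apply several staged addon manifests in one kubectl run,
// mirroring the logged command. Paths are assumptions for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m) // one -f flag per manifest, as in the log
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}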
	I1101 10:19:17.246248  749992 node_ready.go:49] node "old-k8s-version-556573" is "Ready"
	I1101 10:19:17.246302  749992 node_ready.go:38] duration metric: took 2.044967908s for node "old-k8s-version-556573" to be "Ready" ...
	I1101 10:19:17.246323  749992 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:19:17.246395  749992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:19:17.939894  749992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.728461253s)
	I1101 10:19:17.939984  749992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.704466481s)
	I1101 10:19:18.309222  749992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.925834389s)
	I1101 10:19:18.309268  749992 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.062847788s)
	I1101 10:19:18.309289  749992 api_server.go:72] duration metric: took 3.289529128s to wait for apiserver process to appear ...
	I1101 10:19:18.309295  749992 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:19:18.309317  749992 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:19:18.310675  749992 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-556573 addons enable metrics-server
	
	I1101 10:19:18.312581  749992 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 10:19:14.158542  751704 out.go:252] * Restarting existing docker container for "no-preload-680879" ...
	I1101 10:19:14.158664  751704 cli_runner.go:164] Run: docker start no-preload-680879
	I1101 10:19:14.451848  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:14.473899  751704 kic.go:430] container "no-preload-680879" state is running.
	I1101 10:19:14.474323  751704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:19:14.494893  751704 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/config.json ...
	I1101 10:19:14.495209  751704 machine.go:94] provisionDockerMachine start ...
	I1101 10:19:14.495304  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:14.516210  751704 main.go:143] libmachine: Using SSH client type: native
	I1101 10:19:14.516592  751704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1101 10:19:14.516612  751704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:19:14.517488  751704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40124->127.0.0.1:33188: read: connection reset by peer
	I1101 10:19:17.671083  751704 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-680879
	
	I1101 10:19:17.671116  751704 ubuntu.go:182] provisioning hostname "no-preload-680879"
	I1101 10:19:17.671183  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:17.693711  751704 main.go:143] libmachine: Using SSH client type: native
	I1101 10:19:17.694046  751704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1101 10:19:17.694069  751704 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-680879 && echo "no-preload-680879" | sudo tee /etc/hostname
	I1101 10:19:17.865511  751704 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-680879
	
	I1101 10:19:17.865598  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:17.885170  751704 main.go:143] libmachine: Using SSH client type: native
	I1101 10:19:17.885510  751704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1101 10:19:17.885535  751704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-680879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-680879/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-680879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:19:18.039391  751704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
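The SSH command above patches /etc/hosts idempotently: if no line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends a new one. A small Go sketch of the same logic, operating on an in-memory string rather than the real file, is shown here as an illustration; the hostname comes from the log.

// Sketch of the idempotent hosts-file patch performed over SSH above.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func patchHosts(contents, hostname string) string {
	// Already present? (any line whose last field is the hostname)
	for _, line := range strings.Split(contents, "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[len(fields)-1] == hostname {
			return contents
		}
	}
	// Rewrite an existing 127.0.1.1 line if there is one...
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(contents) {
		return re.ReplaceAllString(contents, "127.0.1.1 "+hostname)
	}
	// ...otherwise append a new entry.
	return contents + "127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(patchHosts("127.0.0.1 localhost\n", "no-preload-680879"))
}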
	I1101 10:19:18.039441  751704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:19:18.039471  751704 ubuntu.go:190] setting up certificates
	I1101 10:19:18.039488  751704 provision.go:84] configureAuth start
	I1101 10:19:18.039556  751704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:19:18.060079  751704 provision.go:143] copyHostCerts
	I1101 10:19:18.060161  751704 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:19:18.060186  751704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:19:18.060285  751704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:19:18.060447  751704 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:19:18.060461  751704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:19:18.060504  751704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:19:18.060591  751704 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:19:18.060603  751704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:19:18.060641  751704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:19:18.060713  751704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.no-preload-680879 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-680879]
	I1101 10:19:18.373054  751704 provision.go:177] copyRemoteCerts
	I1101 10:19:18.373135  751704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:19:18.373202  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:18.396141  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:18.506746  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:19:18.535033  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:19:18.566780  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:19:18.594786  751704 provision.go:87] duration metric: took 555.279346ms to configureAuth
	I1101 10:19:18.594824  751704 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:19:18.595042  751704 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:19:18.595177  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:18.616703  751704 main.go:143] libmachine: Using SSH client type: native
	I1101 10:19:18.616951  751704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1101 10:19:18.616972  751704 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:19:16.064474  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:16.065057  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
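The healthz probe that fails above is a plain HTTPS GET against the apiserver's /healthz endpoint with certificate verification skipped and a short timeout; "connection refused" here just means the apiserver has not come back yet. A minimal sketch of such a probe, using the endpoint from the log, is shown below for illustration only.

// Sketch: probe the apiserver healthz endpoint with TLS verification skipped.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		// e.g. "connect: connection refused" while the control plane restarts
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}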
	I1101 10:19:16.065118  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:16.065173  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:16.097289  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:16.097313  734517 cri.go:89] found id: ""
	I1101 10:19:16.097324  734517 logs.go:282] 1 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:16.097390  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:16.102090  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:16.102169  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:16.133466  734517 cri.go:89] found id: ""
	I1101 10:19:16.133501  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.133511  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:16.133519  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:16.133585  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:16.164076  734517 cri.go:89] found id: ""
	I1101 10:19:16.164104  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.164113  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:16.164120  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:16.164181  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:16.197390  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:16.197420  734517 cri.go:89] found id: ""
	I1101 10:19:16.197432  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:16.197502  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:16.202249  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:16.202319  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:16.237786  734517 cri.go:89] found id: ""
	I1101 10:19:16.237821  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.237832  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:16.237867  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:16.237931  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:16.271050  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:16.271077  734517 cri.go:89] found id: ""
	I1101 10:19:16.271088  734517 logs.go:282] 1 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:16.271232  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:16.276136  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:16.276226  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:16.309952  734517 cri.go:89] found id: ""
	I1101 10:19:16.309981  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.309989  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:16.309995  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:16.310077  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:16.346364  734517 cri.go:89] found id: ""
	I1101 10:19:16.346402  734517 logs.go:282] 0 containers: []
	W1101 10:19:16.346414  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:16.346429  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:16.346447  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:16.429966  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:16.430014  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:16.453622  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:16.453662  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:16.524270  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:16.524299  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:16.524317  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:16.563420  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:16.563474  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:16.622109  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:16.622152  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:16.656486  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:16.656525  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:16.708697  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:16.708750  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
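Each retry cycle above walks a fixed list of control-plane components, asks crictl for matching container IDs, and then gathers logs from whatever it found. The sketch below reproduces just the enumeration step under the assumption that crictl is on PATH and reachable via sudo; it is an illustration of the pattern, not minikube's implementation.

// Sketch: list CRI containers per component name, as the log-gathering loop does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}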
	I1101 10:19:19.247958  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:19.248479  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:19.248544  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:19.248609  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:19.292223  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:19.292305  734517 cri.go:89] found id: ""
	I1101 10:19:19.292318  734517 logs.go:282] 1 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:19.292379  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:19.298153  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:19.298252  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:19.334336  734517 cri.go:89] found id: ""
	I1101 10:19:19.334364  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.334372  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:19.334379  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:19.334425  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:19.368799  734517 cri.go:89] found id: ""
	I1101 10:19:19.368831  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.368852  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:19.368861  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:19.368922  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:19.404579  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:19.404611  734517 cri.go:89] found id: ""
	I1101 10:19:19.404623  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:19.404693  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:19.409229  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:19.409312  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:19.439614  734517 cri.go:89] found id: ""
	I1101 10:19:19.439649  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.439660  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:19.439668  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:19.439739  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:19.471181  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:19.471207  734517 cri.go:89] found id: ""
	I1101 10:19:19.471218  734517 logs.go:282] 1 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:19.471275  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:19.475921  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:19.475991  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:19.506647  734517 cri.go:89] found id: ""
	I1101 10:19:19.506677  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.506686  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:19.506692  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:19.506764  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:19.538745  734517 cri.go:89] found id: ""
	I1101 10:19:19.538781  734517 logs.go:282] 0 containers: []
	W1101 10:19:19.538793  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:19.538807  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:19.538820  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:19.619331  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:19.619435  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:19.642129  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:19.642175  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:19.707798  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:19.707820  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:19.707871  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:19.748329  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:19.748362  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:19.797120  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:19.797153  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:19.828136  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:19.828177  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:18.954445  751704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:19:18.954488  751704 machine.go:97] duration metric: took 4.459254718s to provisionDockerMachine
	I1101 10:19:18.954505  751704 start.go:293] postStartSetup for "no-preload-680879" (driver="docker")
	I1101 10:19:18.954520  751704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:19:18.954592  751704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:19:18.954646  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:18.975955  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:19.081641  751704 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:19:19.085894  751704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:19:19.085933  751704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:19:19.085946  751704 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:19:19.086013  751704 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:19:19.086087  751704 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:19:19.086178  751704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:19:19.095576  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:19:19.115984  751704 start.go:296] duration metric: took 161.458399ms for postStartSetup
	I1101 10:19:19.116064  751704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:19:19.116107  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:19.134184  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:19.234946  751704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:19:19.240054  751704 fix.go:56] duration metric: took 5.107333091s for fixHost
	I1101 10:19:19.240087  751704 start.go:83] releasing machines lock for "no-preload-680879", held for 5.1073946s
	I1101 10:19:19.240161  751704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-680879
	I1101 10:19:19.262761  751704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:19:19.262795  751704 ssh_runner.go:195] Run: cat /version.json
	I1101 10:19:19.262868  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:19.262881  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:19.289084  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:19.289094  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:19.459262  751704 ssh_runner.go:195] Run: systemctl --version
	I1101 10:19:19.467531  751704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:19:19.508897  751704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:19:19.514015  751704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:19:19.514091  751704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:19:19.523940  751704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
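The find/mv step above disables any stray bridge or podman CNI configuration by renaming it to <name>.mk_disabled so the runtime ignores it; here nothing matched, so there was nothing to disable. A hedged Go sketch of the same rename-to-disable pattern follows, walking the directory instead of shelling out; directory and name patterns mirror the logged command.

// Sketch: rename bridge/podman CNI configs so CRI-O ignores them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			fmt.Println("disabling", src)
			_ = os.Rename(src, src+".mk_disabled") // needs root on a real node
		}
	}
}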
	I1101 10:19:19.523965  751704 start.go:496] detecting cgroup driver to use...
	I1101 10:19:19.524001  751704 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:19:19.524047  751704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:19:19.541745  751704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:19:19.556237  751704 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:19:19.556316  751704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:19:19.574192  751704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:19:19.588810  751704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:19:19.683130  751704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:19:19.780941  751704 docker.go:234] disabling docker service ...
	I1101 10:19:19.781011  751704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:19:19.796483  751704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:19:19.810507  751704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:19:19.917005  751704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:19:20.007056  751704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:19:20.021468  751704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:19:20.037147  751704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:19:20.037206  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.047599  751704 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:19:20.047677  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.058531  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.069246  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.079292  751704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:19:20.088398  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.098679  751704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.110455  751704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:19:20.120893  751704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:19:20.129381  751704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:19:20.138135  751704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:20.226092  751704 ssh_runner.go:195] Run: sudo systemctl restart crio
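The sed edits above rewrite the CRI-O drop-in to use the expected pause image and the systemd cgroup manager before systemd is reloaded and crio restarted. The sketch below performs the equivalent line rewrites on an in-memory stand-in for /etc/crio/crio.conf.d/02-crio.conf; it illustrates the edit, not the full restart sequence.

// Sketch: rewrite pause_image and cgroup_manager in a CRI-O drop-in.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"

	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	fmt.Print(conf) // after editing, the log reloads systemd and restarts crio
}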
	I1101 10:19:20.346828  751704 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:19:20.346919  751704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:19:20.351801  751704 start.go:564] Will wait 60s for crictl version
	I1101 10:19:20.351876  751704 ssh_runner.go:195] Run: which crictl
	I1101 10:19:20.356247  751704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:19:20.384685  751704 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:19:20.384783  751704 ssh_runner.go:195] Run: crio --version
	I1101 10:19:20.415698  751704 ssh_runner.go:195] Run: crio --version
	I1101 10:19:20.447467  751704 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:19:20.448398  751704 cli_runner.go:164] Run: docker network inspect no-preload-680879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:19:20.466053  751704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:19:20.470688  751704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:19:20.482429  751704 kubeadm.go:884] updating cluster {Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:19:20.482569  751704 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:19:20.482613  751704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:19:20.516114  751704 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:19:20.516138  751704 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:19:20.516146  751704 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:19:20.516264  751704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-680879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:19:20.516329  751704 ssh_runner.go:195] Run: crio config
	I1101 10:19:20.565114  751704 cni.go:84] Creating CNI manager for ""
	I1101 10:19:20.565138  751704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:19:20.565159  751704 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:19:20.565183  751704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-680879 NodeName:no-preload-680879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:19:20.565324  751704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-680879"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:19:20.565388  751704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:19:20.574785  751704 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:19:20.574892  751704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:19:20.583796  751704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:19:20.598416  751704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:19:20.611988  751704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
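The rendered kubeadm config is staged as kubeadm.yaml.new; earlier in this run the same flow diffed it against the existing kubeadm.yaml and concluded "the running cluster does not require reconfiguration". A hedged sketch of that comparison is shown below: diff exit code 0 means identical, 1 means the control plane would need to be restarted with the new config. Paths match the log; error handling is simplified.

// Sketch: decide whether the control plane needs reconfiguration by diffing configs.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("the running cluster does not require reconfiguration")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("config changed, control plane restart needed:\n%s", out)
		return
	}
	fmt.Println("diff failed:", err)
}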
	I1101 10:19:20.625018  751704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:19:20.629192  751704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:19:20.640027  751704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:20.724122  751704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:19:20.750501  751704 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879 for IP: 192.168.85.2
	I1101 10:19:20.750536  751704 certs.go:195] generating shared ca certs ...
	I1101 10:19:20.750569  751704 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:20.750745  751704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:19:20.750800  751704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:19:20.750813  751704 certs.go:257] generating profile certs ...
	I1101 10:19:20.750949  751704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.key
	I1101 10:19:20.751023  751704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key.0ccb300d
	I1101 10:19:20.751079  751704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key
	I1101 10:19:20.751235  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:19:20.751276  751704 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:19:20.751289  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:19:20.751321  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:19:20.751356  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:19:20.751388  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:19:20.751444  751704 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:19:20.752339  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:19:20.772518  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:19:20.793515  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:19:20.815510  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:19:20.839357  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:19:20.861083  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:19:20.881889  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:19:20.902415  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:19:20.923281  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:19:20.945512  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:19:20.967695  751704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:19:20.989326  751704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:19:21.005316  751704 ssh_runner.go:195] Run: openssl version
	I1101 10:19:21.012429  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:19:21.023160  751704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:21.027812  751704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:21.027916  751704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:19:21.066176  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:19:21.076944  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:19:21.087446  751704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:19:21.092261  751704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:19:21.092351  751704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:19:21.129032  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:19:21.139051  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:19:21.149537  751704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:19:21.154578  751704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:19:21.154648  751704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:19:21.193050  751704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
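The ln -fs steps above install each CA certificate under /etc/ssl/certs using its OpenSSL subject hash as the link name (<hash>.0), which is how OpenSSL-based clients locate trusted CAs. The sketch below computes that hash for one of the logged PEM files and prints the symlink command rather than executing it; treat it as an illustration.

// Sketch: derive the /etc/ssl/certs/<hash>.0 link name for a CA certificate.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log's b5213941.0
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
}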
	I1101 10:19:21.203218  751704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:19:21.208012  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:19:21.245512  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:19:21.295248  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:19:21.335754  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:19:21.381868  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:19:21.440668  751704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
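The openssl x509 -checkend 86400 runs above verify that each reused control-plane certificate stays valid for at least another 24 hours before the cluster is restarted. A minimal Go sketch of the same check using crypto/x509 follows; the helper name certExpiresSoon is hypothetical and this is not minikube's actual code, which shells out to openssl as logged.

// certExpiresSoon reports whether the PEM-encoded certificate at path
// expires within the given window (e.g. 24h, mirroring "-checkend 86400").
// Illustrative sketch only; minikube runs openssl on the node instead.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certExpiresSoon(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring "soon" means NotAfter falls before now+window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := certExpiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}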
	I1101 10:19:21.502709  751704 kubeadm.go:401] StartCluster: {Name:no-preload-680879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-680879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:19:21.502902  751704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:19:21.502985  751704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:19:21.538378  751704 cri.go:89] found id: "6fe1794e14c177d264a3e5610bef578069b247e5deb7054c93fb9a70b2ccf7ba"
	I1101 10:19:21.538406  751704 cri.go:89] found id: "a1a084abd5f06aa1899bd7372a8496c6c8eb79b98488279f9c9679a6c0338270"
	I1101 10:19:21.538412  751704 cri.go:89] found id: "8a355ad3dea63414c9311a3f417e38b58b4c399b8aa2b4497aea7e6cd9510af8"
	I1101 10:19:21.538418  751704 cri.go:89] found id: "be916f84dfad93d8e52891dd7a642ef5783afd3b0e1978d42fc11b92d8812a08"
	I1101 10:19:21.538423  751704 cri.go:89] found id: ""
	I1101 10:19:21.538481  751704 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:19:21.553436  751704 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:19:21Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:19:21.553541  751704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:19:21.564329  751704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:19:21.564357  751704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:19:21.564418  751704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:19:21.574610  751704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:19:21.575434  751704 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-680879" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:21.575918  751704 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-680879" cluster setting kubeconfig missing "no-preload-680879" context setting]
	I1101 10:19:21.576605  751704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
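The kubeconfig.go lines above detect that the no-preload-680879 cluster and context entries are missing from the test's kubeconfig and rewrite the file under a file lock. A hedged sketch of that kind of repair using client-go's clientcmd package; the repairKubeconfig helper is hypothetical, and minikube uses its own kubeconfig package rather than this code.

// repairKubeconfig adds a missing cluster/context entry to a kubeconfig file.
// Sketch only, assuming client-go's clientcmd API.
package kubecfg

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func repairKubeconfig(path, name, server, caFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &clientcmdapi.Cluster{
			Server:               server, // e.g. https://192.168.85.2:8443
			CertificateAuthority: caFile,
		}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}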
	I1101 10:19:21.578372  751704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:19:21.588950  751704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:19:21.588998  751704 kubeadm.go:602] duration metric: took 24.634289ms to restartPrimaryControlPlane
	I1101 10:19:21.589012  751704 kubeadm.go:403] duration metric: took 86.317698ms to StartCluster
	I1101 10:19:21.589036  751704 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:21.589124  751704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:19:21.591071  751704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:19:21.591409  751704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:19:21.591548  751704 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:19:21.591659  751704 addons.go:70] Setting storage-provisioner=true in profile "no-preload-680879"
	I1101 10:19:21.591674  751704 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:19:21.591684  751704 addons.go:239] Setting addon storage-provisioner=true in "no-preload-680879"
	W1101 10:19:21.591692  751704 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:19:21.591693  751704 addons.go:70] Setting dashboard=true in profile "no-preload-680879"
	I1101 10:19:21.591716  751704 addons.go:239] Setting addon dashboard=true in "no-preload-680879"
	I1101 10:19:21.591724  751704 addons.go:70] Setting default-storageclass=true in profile "no-preload-680879"
	W1101 10:19:21.591734  751704 addons.go:248] addon dashboard should already be in state true
	I1101 10:19:21.591742  751704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-680879"
	I1101 10:19:21.591763  751704 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:19:21.591726  751704 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:19:21.592128  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:21.592358  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:21.592395  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:21.595061  751704 out.go:179] * Verifying Kubernetes components...
	I1101 10:19:21.596505  751704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:19:21.620285  751704 addons.go:239] Setting addon default-storageclass=true in "no-preload-680879"
	W1101 10:19:21.620312  751704 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:19:21.620343  751704 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:19:21.620908  751704 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:19:21.623328  751704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:19:21.623338  751704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:19:21.624570  751704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:19:21.624607  751704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:19:21.624583  751704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:19:18.313513  749992 addons.go:515] duration metric: took 3.293714409s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:19:18.314886  749992 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 10:19:18.314911  749992 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
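The 500 responses above are expected while the apiserver's rbac/bootstrap-roles post-start hook has not finished; the wait loop simply retries /healthz until it returns 200, which in this log happens about half a second later. A minimal polling sketch in Go follows, under the assumption that TLS verification is skipped for brevity (the real client verifies against the cluster CA).

// waitForAPIServerHealthy polls https://<host>:8443/healthz until it returns
// HTTP 200 or the context expires. Sketch only; TLS verification is skipped
// here for brevity, which the actual minikube client does not do.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForAPIServerHealthy(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForAPIServerHealthy(ctx, "https://192.168.94.2:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}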
	I1101 10:19:18.809396  749992 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:19:18.814293  749992 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 10:19:18.815913  749992 api_server.go:141] control plane version: v1.28.0
	I1101 10:19:18.815948  749992 api_server.go:131] duration metric: took 506.644406ms to wait for apiserver health ...
	I1101 10:19:18.815958  749992 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:19:18.827251  749992 system_pods.go:59] 8 kube-system pods found
	I1101 10:19:18.827308  749992 system_pods.go:61] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:19:18.827323  749992 system_pods.go:61] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:19:18.827338  749992 system_pods.go:61] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:19:18.827347  749992 system_pods.go:61] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:19:18.827354  749992 system_pods.go:61] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:19:18.827363  749992 system_pods.go:61] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:19:18.827370  749992 system_pods.go:61] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:19:18.827378  749992 system_pods.go:61] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:19:18.827388  749992 system_pods.go:74] duration metric: took 11.422494ms to wait for pod list to return data ...
	I1101 10:19:18.827399  749992 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:19:18.831006  749992 default_sa.go:45] found service account: "default"
	I1101 10:19:18.831052  749992 default_sa.go:55] duration metric: took 3.645079ms for default service account to be created ...
	I1101 10:19:18.831065  749992 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:19:18.837717  749992 system_pods.go:86] 8 kube-system pods found
	I1101 10:19:18.837765  749992 system_pods.go:89] "coredns-5dd5756b68-cprx9" [5a80c854-73fb-4cbf-9cc7-2d22fe39fa2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:19:18.837780  749992 system_pods.go:89] "etcd-old-k8s-version-556573" [f6a17243-d310-4663-b6d5-540769c7dbd4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:19:18.837791  749992 system_pods.go:89] "kindnet-cmzcq" [be7200a1-400a-46fa-9832-af04d5ba8826] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:19:18.837803  749992 system_pods.go:89] "kube-apiserver-old-k8s-version-556573" [a6179fa2-51c7-4dd4-9514-b486e97bacf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:19:18.837812  749992 system_pods.go:89] "kube-controller-manager-old-k8s-version-556573" [a15600e1-5b54-4dba-88ad-6b27d54a818f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:19:18.837821  749992 system_pods.go:89] "kube-proxy-s9fsm" [308c1bec-8f02-4276-bb6a-4d15f8d53e89] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:19:18.837828  749992 system_pods.go:89] "kube-scheduler-old-k8s-version-556573" [c4321eb5-4d46-4ba0-a39b-e679adb7fef5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:19:18.837848  749992 system_pods.go:89] "storage-provisioner" [000bb166-71a6-4e7a-b710-d5502eba8fdc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:19:18.837862  749992 system_pods.go:126] duration metric: took 6.787789ms to wait for k8s-apps to be running ...
	I1101 10:19:18.837872  749992 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:19:18.837930  749992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:19:18.855707  749992 system_svc.go:56] duration metric: took 17.820674ms WaitForService to wait for kubelet
	I1101 10:19:18.855745  749992 kubeadm.go:587] duration metric: took 3.835985401s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:19:18.855768  749992 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:19:18.858938  749992 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:19:18.858968  749992 node_conditions.go:123] node cpu capacity is 8
	I1101 10:19:18.858982  749992 node_conditions.go:105] duration metric: took 3.208896ms to run NodePressure ...
	I1101 10:19:18.858995  749992 start.go:242] waiting for startup goroutines ...
	I1101 10:19:18.859002  749992 start.go:247] waiting for cluster config update ...
	I1101 10:19:18.859013  749992 start.go:256] writing updated cluster config ...
	I1101 10:19:18.859268  749992 ssh_runner.go:195] Run: rm -f paused
	I1101 10:19:18.863732  749992 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:19:18.868963  749992 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-cprx9" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:19:20.875614  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	I1101 10:19:21.624699  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:21.625869  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:19:21.625900  751704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:19:21.625985  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:21.655924  751704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:19:21.655951  751704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:19:21.656033  751704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:19:21.658198  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:21.665947  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:21.684777  751704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:19:21.775433  751704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:19:21.791481  751704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:19:21.793208  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:19:21.793237  751704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:19:21.795242  751704 node_ready.go:35] waiting up to 6m0s for node "no-preload-680879" to be "Ready" ...
	I1101 10:19:21.809200  751704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:19:21.815183  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:19:21.815215  751704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:19:21.842695  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:19:21.842811  751704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:19:21.868910  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:19:21.868943  751704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:19:21.890585  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:19:21.890619  751704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:19:21.908119  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:19:21.908149  751704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:19:21.926133  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:19:21.926165  751704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:19:21.943110  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:19:21.943140  751704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:19:21.959502  751704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:19:21.959536  751704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:19:21.977211  751704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:19:23.222263  751704 node_ready.go:49] node "no-preload-680879" is "Ready"
	I1101 10:19:23.222318  751704 node_ready.go:38] duration metric: took 1.427019057s for node "no-preload-680879" to be "Ready" ...
	I1101 10:19:23.222338  751704 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:19:23.222404  751704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:19:23.746820  751704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.95529649s)
	I1101 10:19:23.746900  751704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937678754s)
	I1101 10:19:23.747166  751704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.769905549s)
	I1101 10:19:23.747199  751704 api_server.go:72] duration metric: took 2.155750455s to wait for apiserver process to appear ...
	I1101 10:19:23.747216  751704 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:19:23.747238  751704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:19:23.748776  751704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-680879 addons enable metrics-server
	
	I1101 10:19:23.751489  751704 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:19:23.751521  751704 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:19:23.755321  751704 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 10:19:23.756169  751704 addons.go:515] duration metric: took 2.16462668s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:19:19.896786  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:19.896847  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:22.432926  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:22.433429  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:22.433483  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:22.433571  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:22.470949  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:22.470976  734517 cri.go:89] found id: ""
	I1101 10:19:22.470988  734517 logs.go:282] 1 containers: [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:22.471043  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:22.476694  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:22.476768  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:22.510762  734517 cri.go:89] found id: ""
	I1101 10:19:22.510796  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.510807  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:22.510815  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:22.510885  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:22.547801  734517 cri.go:89] found id: ""
	I1101 10:19:22.547861  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.547873  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:22.547882  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:22.547941  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:22.583315  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:22.583341  734517 cri.go:89] found id: ""
	I1101 10:19:22.583352  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:22.583426  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:22.588943  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:22.589045  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:22.628933  734517 cri.go:89] found id: ""
	I1101 10:19:22.628969  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.628980  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:22.628989  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:22.629058  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:22.665509  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:22.665537  734517 cri.go:89] found id: ""
	I1101 10:19:22.665550  734517 logs.go:282] 1 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:22.665614  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:22.671002  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:22.671079  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:22.703400  734517 cri.go:89] found id: ""
	I1101 10:19:22.703431  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.703442  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:22.703450  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:22.703519  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:22.738119  734517 cri.go:89] found id: ""
	I1101 10:19:22.738157  734517 logs.go:282] 0 containers: []
	W1101 10:19:22.738179  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:22.738195  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:22.738210  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:22.809674  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:22.809699  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:22.809717  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:22.849950  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:22.849990  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:22.906141  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:22.906186  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:22.936474  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:22.936509  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:22.982323  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:22.982374  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:23.026207  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:23.026247  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:23.114983  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:23.115100  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
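The cri.go and logs.go lines above locate each control-plane container by running crictl ps -a --quiet --name=<component> on the node and collecting the printed IDs, then fetch the last 400 log lines per container. A rough local equivalent in Go, using os/exec in place of minikube's SSH runner (function names here are illustrative, not minikube's):

// listCRIContainers returns the container IDs that crictl reports for a given
// name filter, mirroring the "crictl ps -a --quiet --name=<name>" calls in the
// log above. Sketch only; minikube executes this over SSH inside the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := listCRIContainers(component)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", component, len(ids), ids)
	}
}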
	W1101 10:19:22.876618  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:25.375173  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:27.375348  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	I1101 10:19:24.247750  751704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:19:24.252681  751704 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:19:24.252728  751704 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:19:24.747366  751704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:19:24.751788  751704 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:19:24.752925  751704 api_server.go:141] control plane version: v1.34.1
	I1101 10:19:24.752953  751704 api_server.go:131] duration metric: took 1.005725599s to wait for apiserver health ...
	I1101 10:19:24.752962  751704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:19:24.756509  751704 system_pods.go:59] 8 kube-system pods found
	I1101 10:19:24.756547  751704 system_pods.go:61] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:19:24.756556  751704 system_pods.go:61] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:19:24.756566  751704 system_pods.go:61] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:19:24.756575  751704 system_pods.go:61] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:19:24.756583  751704 system_pods.go:61] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:19:24.756593  751704 system_pods.go:61] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:19:24.756601  751704 system_pods.go:61] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:19:24.756617  751704 system_pods.go:61] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:19:24.756632  751704 system_pods.go:74] duration metric: took 3.660816ms to wait for pod list to return data ...
	I1101 10:19:24.756644  751704 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:19:24.759226  751704 default_sa.go:45] found service account: "default"
	I1101 10:19:24.759251  751704 default_sa.go:55] duration metric: took 2.59663ms for default service account to be created ...
	I1101 10:19:24.759263  751704 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:19:24.762366  751704 system_pods.go:86] 8 kube-system pods found
	I1101 10:19:24.762401  751704 system_pods.go:89] "coredns-66bc5c9577-rh4z7" [76d75e15-e9dd-4d86-97f2-d24aa8d1e4af] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:19:24.762408  751704 system_pods.go:89] "etcd-no-preload-680879" [3939de6d-be97-45fc-8d21-9abe90802b56] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:19:24.762414  751704 system_pods.go:89] "kindnet-sjzlx" [2be6e8f4-e62c-4075-b883-b34e1b3c71f4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:19:24.762419  751704 system_pods.go:89] "kube-apiserver-no-preload-680879" [9c742728-9a4b-453a-be1a-c7e33498f86c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:19:24.762424  751704 system_pods.go:89] "kube-controller-manager-no-preload-680879" [3ff3f6e5-bee2-48f0-a1b3-9c592ae80156] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:19:24.762430  751704 system_pods.go:89] "kube-proxy-ft2vw" [f097a1a9-0797-4a99-bbd5-4a8a8356f82d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:19:24.762444  751704 system_pods.go:89] "kube-scheduler-no-preload-680879" [60504e8f-872c-4189-826f-8d251e790473] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:19:24.762451  751704 system_pods.go:89] "storage-provisioner" [ff9ec9cf-0c09-4056-b82a-e0f9fd9e880d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:19:24.762462  751704 system_pods.go:126] duration metric: took 3.19248ms to wait for k8s-apps to be running ...
	I1101 10:19:24.762470  751704 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:19:24.762527  751704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:19:24.776310  751704 system_svc.go:56] duration metric: took 13.822575ms WaitForService to wait for kubelet
	I1101 10:19:24.776348  751704 kubeadm.go:587] duration metric: took 3.184901564s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:19:24.776374  751704 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:19:24.779587  751704 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:19:24.779617  751704 node_conditions.go:123] node cpu capacity is 8
	I1101 10:19:24.779634  751704 node_conditions.go:105] duration metric: took 3.254067ms to run NodePressure ...
	I1101 10:19:24.779651  751704 start.go:242] waiting for startup goroutines ...
	I1101 10:19:24.779660  751704 start.go:247] waiting for cluster config update ...
	I1101 10:19:24.779676  751704 start.go:256] writing updated cluster config ...
	I1101 10:19:24.779992  751704 ssh_runner.go:195] Run: rm -f paused
	I1101 10:19:24.784439  751704 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:19:24.789934  751704 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rh4z7" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:19:26.796088  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:28.796953  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
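The repeated pod_ready warnings above are a poll loop: roughly every two seconds the test re-reads the coredns pod and reports that its Ready condition is still false, continuing until the pod becomes Ready, disappears, or the 4m0s budget runs out. A sketch of such a wait with client-go, under the assumption that a kubernetes.Interface clientset has already been constructed (minikube's pod_ready.go handles more label selectors and error cases):

// waitForPodReady polls a pod until its Ready condition is True, the pod is
// gone, or the timeout elapses. Illustrative sketch only.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // pod is gone; nothing left to wait for
			}
			if err != nil {
				return false, nil // transient error; keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}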
	I1101 10:19:25.639713  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1101 10:19:29.875431  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:31.875777  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:30.797265  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:32.797768  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:30.640105  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:19:30.640224  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:30.640309  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:30.669948  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:30.669974  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:30.669980  734517 cri.go:89] found id: ""
	I1101 10:19:30.669990  734517 logs.go:282] 2 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:30.670068  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:30.674384  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:30.678429  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:30.678513  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:30.709314  734517 cri.go:89] found id: ""
	I1101 10:19:30.709345  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.709354  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:30.709361  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:30.709420  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:30.748279  734517 cri.go:89] found id: ""
	I1101 10:19:30.748310  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.748322  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:30.748330  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:30.748392  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:30.790674  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:30.790703  734517 cri.go:89] found id: ""
	I1101 10:19:30.790714  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:30.790780  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:30.796615  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:30.796713  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:30.840009  734517 cri.go:89] found id: ""
	I1101 10:19:30.840048  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.840060  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:30.840069  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:30.840477  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:30.884710  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:30.884740  734517 cri.go:89] found id: ""
	I1101 10:19:30.884752  734517 logs.go:282] 1 containers: [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:30.884824  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:30.890423  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:30.890501  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:30.929390  734517 cri.go:89] found id: ""
	I1101 10:19:30.929423  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.929445  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:30.929455  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:30.929607  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:30.973670  734517 cri.go:89] found id: ""
	I1101 10:19:30.973704  734517 logs.go:282] 0 containers: []
	W1101 10:19:30.973715  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:30.973735  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:30.973754  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:31.004505  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:31.004541  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:34.375939  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:36.375989  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:35.296268  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:37.794812  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:38.874669  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:41.376959  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:39.796287  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:41.796342  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:41.094725  734517 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.09015546s)
	W1101 10:19:41.094778  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 10:19:41.094787  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:41.094800  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:41.124251  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:41.124290  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:41.180573  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:41.180619  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:41.216766  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:41.216811  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:41.251813  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:41.251870  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:41.303462  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:41.303504  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:41.339779  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:41.339816  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:43.915927  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1101 10:19:43.875685  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:45.875727  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	W1101 10:19:44.295613  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:46.295870  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:48.296468  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:45.568334  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:34984->192.168.103.2:8443: read: connection reset by peer
	I1101 10:19:45.568421  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:45.568493  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:45.599017  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:45.599044  734517 cri.go:89] found id: "294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:45.599050  734517 cri.go:89] found id: ""
	I1101 10:19:45.599060  734517 logs.go:282] 2 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7]
	I1101 10:19:45.599116  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.603584  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.607753  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:45.607819  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:45.636802  734517 cri.go:89] found id: ""
	I1101 10:19:45.636830  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.636868  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:45.636876  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:45.636940  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:45.666802  734517 cri.go:89] found id: ""
	I1101 10:19:45.666828  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.666873  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:45.666880  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:45.666932  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:45.695967  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:45.695997  734517 cri.go:89] found id: ""
	I1101 10:19:45.696008  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:45.696079  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.700314  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:45.700384  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:45.728533  734517 cri.go:89] found id: ""
	I1101 10:19:45.728571  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.728580  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:45.728586  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:45.728648  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:45.758235  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:45.758263  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:45.758269  734517 cri.go:89] found id: ""
	I1101 10:19:45.758281  734517 logs.go:282] 2 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:45.758348  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.762777  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:45.766925  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:45.767004  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:45.796444  734517 cri.go:89] found id: ""
	I1101 10:19:45.796470  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.796481  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:45.796488  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:45.796551  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:45.825314  734517 cri.go:89] found id: ""
	I1101 10:19:45.825342  734517 logs.go:282] 0 containers: []
	W1101 10:19:45.825354  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:45.825374  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:45.825391  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:45.855107  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:45.855134  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:45.885414  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:45.885442  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:45.918148  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:45.918184  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:45.951217  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:45.951252  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:46.006867  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:46.006915  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:46.085345  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:46.085386  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:46.104730  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:46.104766  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:46.164389  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:46.164409  734517 logs.go:123] Gathering logs for kube-apiserver [294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7] ...
	I1101 10:19:46.164425  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 294a1a78c7aea17249c997e80a6c8ca8517b766609970b4f11e34351b8de93e7"
	I1101 10:19:46.200848  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:46.200885  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:48.750693  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:48.751183  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:48.751240  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:48.751295  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:48.781751  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:48.781779  734517 cri.go:89] found id: ""
	I1101 10:19:48.781791  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:19:48.781864  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:48.786232  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:48.786310  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:48.816117  734517 cri.go:89] found id: ""
	I1101 10:19:48.816143  734517 logs.go:282] 0 containers: []
	W1101 10:19:48.816159  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:48.816166  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:48.816240  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:48.846244  734517 cri.go:89] found id: ""
	I1101 10:19:48.846276  734517 logs.go:282] 0 containers: []
	W1101 10:19:48.846285  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:48.846292  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:48.846352  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:48.876090  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:48.876117  734517 cri.go:89] found id: ""
	I1101 10:19:48.876126  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:48.876178  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:48.880724  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:48.880811  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:48.909280  734517 cri.go:89] found id: ""
	I1101 10:19:48.909305  734517 logs.go:282] 0 containers: []
	W1101 10:19:48.909313  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:48.909319  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:48.909385  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:48.939374  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:48.939404  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:48.939410  734517 cri.go:89] found id: ""
	I1101 10:19:48.939421  734517 logs.go:282] 2 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:48.939482  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:48.943821  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:48.948103  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:48.948164  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:48.977963  734517 cri.go:89] found id: ""
	I1101 10:19:48.977988  734517 logs.go:282] 0 containers: []
	W1101 10:19:48.977996  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:48.978002  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:48.978055  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:49.008092  734517 cri.go:89] found id: ""
	I1101 10:19:49.008120  734517 logs.go:282] 0 containers: []
	W1101 10:19:49.008131  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:49.008178  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:49.008211  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:49.068277  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:49.068309  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:49.068334  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:49.118388  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:49.118430  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:49.149162  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:49.149194  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:49.183198  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:49.183239  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:49.267756  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:49.267799  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:49.287461  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:49.287495  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:49.323714  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:49.323755  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:49.351967  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:49.351998  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1101 10:19:48.375124  749992 pod_ready.go:104] pod "coredns-5dd5756b68-cprx9" is not "Ready", error: <nil>
	I1101 10:19:50.375076  749992 pod_ready.go:94] pod "coredns-5dd5756b68-cprx9" is "Ready"
	I1101 10:19:50.375111  749992 pod_ready.go:86] duration metric: took 31.506116562s for pod "coredns-5dd5756b68-cprx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.377714  749992 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.381431  749992 pod_ready.go:94] pod "etcd-old-k8s-version-556573" is "Ready"
	I1101 10:19:50.381458  749992 pod_ready.go:86] duration metric: took 3.720753ms for pod "etcd-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.384145  749992 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.387975  749992 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-556573" is "Ready"
	I1101 10:19:50.388002  749992 pod_ready.go:86] duration metric: took 3.831146ms for pod "kube-apiserver-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.390725  749992 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.574275  749992 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-556573" is "Ready"
	I1101 10:19:50.574314  749992 pod_ready.go:86] duration metric: took 183.564409ms for pod "kube-controller-manager-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:50.774176  749992 pod_ready.go:83] waiting for pod "kube-proxy-s9fsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:51.173486  749992 pod_ready.go:94] pod "kube-proxy-s9fsm" is "Ready"
	I1101 10:19:51.173516  749992 pod_ready.go:86] duration metric: took 399.310179ms for pod "kube-proxy-s9fsm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:51.374482  749992 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:51.773087  749992 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-556573" is "Ready"
	I1101 10:19:51.773122  749992 pod_ready.go:86] duration metric: took 398.611575ms for pod "kube-scheduler-old-k8s-version-556573" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:51.773138  749992 pod_ready.go:40] duration metric: took 32.909366231s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:19:51.820290  749992 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:19:51.822627  749992 out.go:203] 
	W1101 10:19:51.823943  749992 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:19:51.825182  749992 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:19:51.826371  749992 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-556573" cluster and "default" namespace by default
	W1101 10:19:50.795787  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:52.796811  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:51.917898  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:51.918327  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:51.918392  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:51.918454  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:51.950042  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:51.950065  734517 cri.go:89] found id: ""
	I1101 10:19:51.950076  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:19:51.950137  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:51.954479  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:51.954556  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:51.986469  734517 cri.go:89] found id: ""
	I1101 10:19:51.986495  734517 logs.go:282] 0 containers: []
	W1101 10:19:51.986502  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:51.986509  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:51.986555  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:52.015764  734517 cri.go:89] found id: ""
	I1101 10:19:52.015794  734517 logs.go:282] 0 containers: []
	W1101 10:19:52.015805  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:52.015814  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:52.015909  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:52.044792  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:52.044815  734517 cri.go:89] found id: ""
	I1101 10:19:52.044823  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:52.044917  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:52.049731  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:52.049813  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:52.081371  734517 cri.go:89] found id: ""
	I1101 10:19:52.081402  734517 logs.go:282] 0 containers: []
	W1101 10:19:52.081414  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:52.081423  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:52.081482  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:52.114665  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:52.114704  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:52.114816  734517 cri.go:89] found id: ""
	I1101 10:19:52.114828  734517 logs.go:282] 2 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:52.115065  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:52.120950  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:52.126220  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:52.126305  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:52.161044  734517 cri.go:89] found id: ""
	I1101 10:19:52.161072  734517 logs.go:282] 0 containers: []
	W1101 10:19:52.161081  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:52.161088  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:52.161150  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:52.195536  734517 cri.go:89] found id: ""
	I1101 10:19:52.195560  734517 logs.go:282] 0 containers: []
	W1101 10:19:52.195568  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:52.195586  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:52.195598  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:52.236807  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:52.236871  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:52.269035  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:52.269075  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:52.357207  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:52.357253  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:52.382568  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:52.382630  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:52.445059  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:52.445081  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:52.445100  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:52.496306  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:52.496351  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:52.525982  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:52.526012  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:52.583145  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:52.583185  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1101 10:19:55.296501  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	W1101 10:19:57.796181  751704 pod_ready.go:104] pod "coredns-66bc5c9577-rh4z7" is not "Ready", error: <nil>
	I1101 10:19:58.796405  751704 pod_ready.go:94] pod "coredns-66bc5c9577-rh4z7" is "Ready"
	I1101 10:19:58.796436  751704 pod_ready.go:86] duration metric: took 34.006472134s for pod "coredns-66bc5c9577-rh4z7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.799179  751704 pod_ready.go:83] waiting for pod "etcd-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.803734  751704 pod_ready.go:94] pod "etcd-no-preload-680879" is "Ready"
	I1101 10:19:58.803766  751704 pod_ready.go:86] duration metric: took 4.559043ms for pod "etcd-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.806246  751704 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.810722  751704 pod_ready.go:94] pod "kube-apiserver-no-preload-680879" is "Ready"
	I1101 10:19:58.810755  751704 pod_ready.go:86] duration metric: took 4.482193ms for pod "kube-apiserver-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:58.813105  751704 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:55.118905  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:55.119416  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:55.119479  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:55.119530  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:55.150015  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:55.150047  734517 cri.go:89] found id: ""
	I1101 10:19:55.150056  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:19:55.150106  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:55.155248  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:55.155325  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:55.186955  734517 cri.go:89] found id: ""
	I1101 10:19:55.186989  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.187003  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:55.187012  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:55.187080  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:55.219523  734517 cri.go:89] found id: ""
	I1101 10:19:55.219548  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.219557  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:55.219564  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:55.219615  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:55.250437  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:55.250461  734517 cri.go:89] found id: ""
	I1101 10:19:55.250471  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:55.250535  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:55.255162  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:55.255234  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:55.286379  734517 cri.go:89] found id: ""
	I1101 10:19:55.286416  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.286427  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:55.286435  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:55.286512  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:55.319680  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:55.319707  734517 cri.go:89] found id: "b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:55.319712  734517 cri.go:89] found id: ""
	I1101 10:19:55.319723  734517 logs.go:282] 2 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99]
	I1101 10:19:55.319793  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:55.324355  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:55.328464  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:55.328548  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:55.359344  734517 cri.go:89] found id: ""
	I1101 10:19:55.359379  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.359391  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:55.359399  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:55.359454  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:55.389253  734517 cri.go:89] found id: ""
	I1101 10:19:55.389285  734517 logs.go:282] 0 containers: []
	W1101 10:19:55.389294  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:55.389314  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:55.389331  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:55.408604  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:55.408658  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:55.458100  734517 logs.go:123] Gathering logs for kube-controller-manager [b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99] ...
	I1101 10:19:55.458145  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b349c8cebbab9c02860c362f6155ed648175d05b6d9089642c1b53dafcf18b99"
	I1101 10:19:55.488110  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:55.488149  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:55.544178  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:55.544232  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:55.603764  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:55.603791  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:55.603810  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:55.638460  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:55.638498  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:55.667868  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:55.667897  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:55.700741  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:55.700772  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:58.281556  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:19:58.282113  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:19:58.282172  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:19:58.282237  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:19:58.313748  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:58.313774  734517 cri.go:89] found id: ""
	I1101 10:19:58.313783  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:19:58.313848  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:58.318094  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:19:58.318154  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:19:58.347645  734517 cri.go:89] found id: ""
	I1101 10:19:58.347670  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.347678  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:19:58.347693  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:19:58.347744  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:19:58.377365  734517 cri.go:89] found id: ""
	I1101 10:19:58.377394  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.377408  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:19:58.377415  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:19:58.377501  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:19:58.406919  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:58.406943  734517 cri.go:89] found id: ""
	I1101 10:19:58.406953  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:19:58.407013  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:58.411320  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:19:58.411395  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:19:58.441180  734517 cri.go:89] found id: ""
	I1101 10:19:58.441210  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.441221  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:19:58.441229  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:19:58.441289  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:19:58.471079  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:58.471107  734517 cri.go:89] found id: ""
	I1101 10:19:58.471124  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:19:58.471190  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:19:58.476014  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:19:58.476116  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:19:58.506198  734517 cri.go:89] found id: ""
	I1101 10:19:58.506243  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.506255  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:19:58.506263  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:19:58.506324  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:19:58.539304  734517 cri.go:89] found id: ""
	I1101 10:19:58.539334  734517 logs.go:282] 0 containers: []
	W1101 10:19:58.539344  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:19:58.539359  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:19:58.539377  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:19:58.575009  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:19:58.575046  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:19:58.625036  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:19:58.625081  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:19:58.654912  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:19:58.654948  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:19:58.707728  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:19:58.707771  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:19:58.741875  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:19:58.741908  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:19:58.834649  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:19:58.834707  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:19:58.855809  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:19:58.855889  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:19:58.919467  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:19:58.994757  751704 pod_ready.go:94] pod "kube-controller-manager-no-preload-680879" is "Ready"
	I1101 10:19:58.994786  751704 pod_ready.go:86] duration metric: took 181.653237ms for pod "kube-controller-manager-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:59.195422  751704 pod_ready.go:83] waiting for pod "kube-proxy-ft2vw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:59.594857  751704 pod_ready.go:94] pod "kube-proxy-ft2vw" is "Ready"
	I1101 10:19:59.594891  751704 pod_ready.go:86] duration metric: took 399.432038ms for pod "kube-proxy-ft2vw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:19:59.794059  751704 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:20:00.194949  751704 pod_ready.go:94] pod "kube-scheduler-no-preload-680879" is "Ready"
	I1101 10:20:00.194993  751704 pod_ready.go:86] duration metric: took 400.90442ms for pod "kube-scheduler-no-preload-680879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:20:00.195011  751704 pod_ready.go:40] duration metric: took 35.410529293s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:20:00.247126  751704 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:20:00.249139  751704 out.go:179] * Done! kubectl is now configured to use "no-preload-680879" cluster and "default" namespace by default
	I1101 10:20:01.420696  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:01.421437  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:20:01.421513  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:01.421585  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:01.452654  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:01.452686  734517 cri.go:89] found id: ""
	I1101 10:20:01.452697  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:20:01.452773  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:01.457474  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:01.457582  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:01.488979  734517 cri.go:89] found id: ""
	I1101 10:20:01.489008  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.489019  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:01.489028  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:01.489094  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:01.519726  734517 cri.go:89] found id: ""
	I1101 10:20:01.519753  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.519761  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:01.519768  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:01.519817  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:01.550139  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:01.550163  734517 cri.go:89] found id: ""
	I1101 10:20:01.550172  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:01.550281  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:01.554678  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:01.554749  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:01.585678  734517 cri.go:89] found id: ""
	I1101 10:20:01.585713  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.585726  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:01.585736  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:01.585805  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:01.616144  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:01.616177  734517 cri.go:89] found id: ""
	I1101 10:20:01.616190  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:01.616264  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:01.620521  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:01.620597  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:01.650942  734517 cri.go:89] found id: ""
	I1101 10:20:01.650969  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.650978  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:01.650984  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:01.651038  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:01.683160  734517 cri.go:89] found id: ""
	I1101 10:20:01.683193  734517 logs.go:282] 0 containers: []
	W1101 10:20:01.683206  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:01.683222  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:20:01.683242  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:01.718993  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:01.719036  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:01.767980  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:20:01.768024  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:01.799251  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:20:01.799285  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:20:01.858737  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:20:01.858783  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:20:01.893940  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:20:01.893970  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:20:01.980857  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:20:01.980905  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:20:02.002755  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:20:02.002794  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:20:02.064896  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:20:04.566524  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:04.567108  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:20:04.567192  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:04.567262  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:04.599913  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:04.599938  734517 cri.go:89] found id: ""
	I1101 10:20:04.599948  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:20:04.599999  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:04.604290  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:04.604357  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:04.638516  734517 cri.go:89] found id: ""
	I1101 10:20:04.638551  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.638562  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:04.638570  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:04.638637  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:04.668368  734517 cri.go:89] found id: ""
	I1101 10:20:04.668399  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.668407  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:04.668417  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:04.668476  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:04.699489  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:04.699512  734517 cri.go:89] found id: ""
	I1101 10:20:04.699521  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:04.699573  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:04.703986  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:04.704058  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:04.734280  734517 cri.go:89] found id: ""
	I1101 10:20:04.734328  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.734344  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:04.734354  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:04.734424  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:04.763968  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:04.763993  734517 cri.go:89] found id: ""
	I1101 10:20:04.764002  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:04.764055  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:04.768504  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:04.768584  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:04.798325  734517 cri.go:89] found id: ""
	I1101 10:20:04.798360  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.798371  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:04.798380  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:04.798452  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:04.830627  734517 cri.go:89] found id: ""
	I1101 10:20:04.830661  734517 logs.go:282] 0 containers: []
	W1101 10:20:04.830672  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:04.830684  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:04.830697  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	
	
	==> CRI-O <==
	Nov 01 10:19:36 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:36.825852582Z" level=info msg="Created container 60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks/kubernetes-dashboard" id=0d4c0cf4-4471-4e44-8de2-969d5f185774 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:36 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:36.826569841Z" level=info msg="Starting container: 60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7" id=3b2ab99a-5347-4440-ac6d-f78e7b2be0cf name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:36 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:36.828402881Z" level=info msg="Started container" PID=1712 containerID=60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks/kubernetes-dashboard id=3b2ab99a-5347-4440-ac6d-f78e7b2be0cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d0892a7e37ec40dccf925d91bf95c6a7631952ff9b460a7ab5c7a1364243258
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.358019938Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a97692c7-db44-45e1-8861-5b8d27039432 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.359013674Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e5668469-4491-4066-ab72-ed3d7566d8cd name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.360168835Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7a50cd59-38b4-442d-a42b-34fe40f89274 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.360339725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.364746988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.365000253Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7efd9bce602a8f703413d4bc6ac93cf2f49ccf5576287846ab43932b910c6d14/merged/etc/passwd: no such file or directory"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.365040164Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7efd9bce602a8f703413d4bc6ac93cf2f49ccf5576287846ab43932b910c6d14/merged/etc/group: no such file or directory"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.365338721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.399346433Z" level=info msg="Created container eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4: kube-system/storage-provisioner/storage-provisioner" id=7a50cd59-38b4-442d-a42b-34fe40f89274 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.400021439Z" level=info msg="Starting container: eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4" id=a4df1c56-c6de-40c6-b7ec-161939e0fdb8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:49 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:49.402019572Z" level=info msg="Started container" PID=1734 containerID=eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4 description=kube-system/storage-provisioner/storage-provisioner id=a4df1c56-c6de-40c6-b7ec-161939e0fdb8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86ffaf279f28493105abc4d6cdef7ee4b4916318cfdc6726c7019884bd8fb66b
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.213065702Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=65673f1b-4716-4a22-9041-548fb5c30e6d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.2141801Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c8314990-d257-45b9-904b-033522077626 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.215295668Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs/dashboard-metrics-scraper" id=0d0608a7-a1c9-493d-bc75-4fdd1ebe556f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.215465586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.221515306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.222222334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.262952532Z" level=info msg="Created container 1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs/dashboard-metrics-scraper" id=0d0608a7-a1c9-493d-bc75-4fdd1ebe556f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.2637673Z" level=info msg="Starting container: 1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc" id=6f4428a6-14f4-45ff-ab12-4efa9ea82e30 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.266227172Z" level=info msg="Started container" PID=1771 containerID=1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs/dashboard-metrics-scraper id=6f4428a6-14f4-45ff-ab12-4efa9ea82e30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02b250581c808b724e1fe1c8794c41e7769fc7df53a3228427283892725055e1
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.371431563Z" level=info msg="Removing container: 9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12" id=85b60ce8-6a33-48bf-b5ac-57df4626d63b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:19:52 old-k8s-version-556573 crio[565]: time="2025-11-01T10:19:52.383253787Z" level=info msg="Removed container 9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs/dashboard-metrics-scraper" id=85b60ce8-6a33-48bf-b5ac-57df4626d63b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1cca6171f6e63       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   02b250581c808       dashboard-metrics-scraper-5f989dc9cf-xdjzs       kubernetes-dashboard
	eb353e58c0fc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   86ffaf279f284       storage-provisioner                              kube-system
	60c3ea523dc72       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   1d0892a7e37ec       kubernetes-dashboard-8694d4445c-wrwks            kubernetes-dashboard
	17a38fc632529       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   a31721273572e       coredns-5dd5756b68-cprx9                         kube-system
	86f363e26ca1a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   771767b62109a       busybox                                          default
	afb66b64e1b12       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   ca314b9d29594       kube-proxy-s9fsm                                 kube-system
	8fd6240f85ba7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   86ffaf279f284       storage-provisioner                              kube-system
	39fe07ee60bf7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   0c8ad63b226c6       kindnet-cmzcq                                    kube-system
	f7ba02ac93628       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   05b82e5667c49       etcd-old-k8s-version-556573                      kube-system
	898589e23f303       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   f5b0e6f9cfaf9       kube-apiserver-old-k8s-version-556573            kube-system
	def0c7222196b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   37719e333ec60       kube-scheduler-old-k8s-version-556573            kube-system
	34df676c07e5e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   dbe5dbd771c26       kube-controller-manager-old-k8s-version-556573   kube-system
	
	
	==> coredns [17a38fc632529ff81911abfb211dcd7b07d60fd60c225ccae529e36e62d8b497] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38979 - 57152 "HINFO IN 2696036869424178194.8019122094304270670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.040607808s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-556573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-556573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=old-k8s-version-556573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_18_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-556573
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:19:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:19:47 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:19:47 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:19:47 +0000   Sat, 01 Nov 2025 10:18:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:19:47 +0000   Sat, 01 Nov 2025 10:18:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-556573
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                684343d3-91b0-49c0-8416-d6f599882a42
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-cprx9                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-556573                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-cmzcq                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-556573             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-556573    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-s9fsm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-556573             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-xdjzs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-wrwks             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-556573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node old-k8s-version-556573 event: Registered Node old-k8s-version-556573 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-556573 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-556573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-556573 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-556573 event: Registered Node old-k8s-version-556573 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [f7ba02ac9362802eef20c5f8870a35d429e636eb86c22620f260caf726977133] <==
	{"level":"info","ts":"2025-11-01T10:19:14.834417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-01T10:19:14.834597Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:19:14.834681Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:19:14.83495Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:19:14.835202Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:19:14.83527Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:19:14.841345Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:19:14.841638Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:19:14.841697Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:19:14.841774Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-01T10:19:14.841801Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-01T10:19:16.209344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:19:16.209389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:19:16.209404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-01T10:19:16.209416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:19:16.209421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-01T10:19:16.209429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:19:16.209437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-01T10:19:16.210476Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-556573 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:19:16.210486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:19:16.210504Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:19:16.210753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:19:16.210782Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:19:16.211774Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:19:16.211777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 10:20:09 up  3:02,  0 user,  load average: 2.45, 3.26, 2.69
	Linux old-k8s-version-556573 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39fe07ee60bf7ed7e063e6b8673b642d58d70c7d696018d876b8bdb6e0d86d70] <==
	I1101 10:19:18.805266       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:19:18.805523       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:19:18.805669       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:19:18.805689       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:19:18.805702       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:19:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:19:19.008795       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:19:19.008815       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:19:19.008824       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:19:19.008974       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:19:19.450982       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:19:19.451025       1 metrics.go:72] Registering metrics
	I1101 10:19:19.451235       1 controller.go:711] "Syncing nftables rules"
	I1101 10:19:29.008964       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:19:29.009014       1 main.go:301] handling current node
	I1101 10:19:39.009231       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:19:39.009271       1 main.go:301] handling current node
	I1101 10:19:49.009354       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:19:49.009392       1 main.go:301] handling current node
	I1101 10:19:59.009459       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:19:59.009508       1 main.go:301] handling current node
	I1101 10:20:09.015156       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:20:09.015187       1 main.go:301] handling current node
	
	
	==> kube-apiserver [898589e23f303c22d96fcb1dea82d386d8e8ed945f8c83a07c7f63c935471dbd] <==
	I1101 10:19:17.199606       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1101 10:19:17.256428       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:19:17.299946       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 10:19:17.300008       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 10:19:17.300068       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:19:17.300081       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 10:19:17.300119       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:19:17.300206       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:19:17.300219       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:19:17.300226       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:19:17.300233       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:19:17.300520       1 shared_informer.go:318] Caches are synced for configmaps
	E1101 10:19:17.306165       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:19:17.338021       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:19:18.144728       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:19:18.184545       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:19:18.207015       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:19:18.212350       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:19:18.221671       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:19:18.231761       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:19:18.286454       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.66.66"}
	I1101 10:19:18.301514       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.27.153"}
	I1101 10:19:30.163859       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:19:30.190828       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 10:19:30.194594       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34df676c07e5e1c97b53a43963c2ebbd436e0bd1bf7587e9f70aea3ccac71699] <==
	I1101 10:19:30.202165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.134µs"
	I1101 10:19:30.204310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.954631ms"
	I1101 10:19:30.206787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.097703ms"
	I1101 10:19:30.217112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.734506ms"
	I1101 10:19:30.217238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.661µs"
	I1101 10:19:30.217286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.3µs"
	I1101 10:19:30.218954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.871µs"
	I1101 10:19:30.220173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.313729ms"
	I1101 10:19:30.220306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="82.786µs"
	I1101 10:19:30.227908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="116.615µs"
	I1101 10:19:30.266111       1 shared_informer.go:318] Caches are synced for disruption
	I1101 10:19:30.300187       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:19:30.386448       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:19:30.705171       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:19:30.763422       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:19:30.763459       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:19:33.322367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.183µs"
	I1101 10:19:34.327291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.03µs"
	I1101 10:19:35.330174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.055µs"
	I1101 10:19:37.342658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.345868ms"
	I1101 10:19:37.342751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.315µs"
	I1101 10:19:50.333398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.973899ms"
	I1101 10:19:50.333539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.952µs"
	I1101 10:19:52.383048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.316µs"
	I1101 10:20:00.520806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="183.118µs"
	
	
	==> kube-proxy [afb66b64e1b12d5df0e760a5855c578f0d4a4b6656cb02a4aee48ff926e6c3ed] <==
	I1101 10:19:18.621708       1 server_others.go:69] "Using iptables proxy"
	I1101 10:19:18.631742       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1101 10:19:18.650169       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:19:18.653195       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:19:18.653248       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:19:18.653258       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:19:18.653291       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:19:18.653572       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:19:18.653641       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:19:18.654378       1 config.go:188] "Starting service config controller"
	I1101 10:19:18.654444       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:19:18.655277       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:19:18.655446       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:19:18.655535       1 config.go:315] "Starting node config controller"
	I1101 10:19:18.655594       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:19:18.755762       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 10:19:18.755813       1 shared_informer.go:318] Caches are synced for node config
	I1101 10:19:18.755805       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [def0c7222196bef86484e9e3c0a80fd1e6c0281c8d8ab1bbf3ec0fb56299940b] <==
	I1101 10:19:15.398438       1 serving.go:348] Generated self-signed cert in-memory
	I1101 10:19:17.268557       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 10:19:17.268583       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:19:17.272532       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 10:19:17.272556       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:19:17.272569       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 10:19:17.272581       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:19:17.272588       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:19:17.272607       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 10:19:17.274620       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 10:19:17.274681       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 10:19:17.373518       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 10:19:17.373591       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1101 10:19:17.373523       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.207931     724 topology_manager.go:215] "Topology Admit Handler" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-xdjzs"
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.321008     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a38386b4-80d8-4037-8ca8-f9885dd37c2d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-xdjzs\" (UID: \"a38386b4-80d8-4037-8ca8-f9885dd37c2d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs"
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.321077     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8b4\" (UniqueName: \"kubernetes.io/projected/a38386b4-80d8-4037-8ca8-f9885dd37c2d-kube-api-access-rp8b4\") pod \"dashboard-metrics-scraper-5f989dc9cf-xdjzs\" (UID: \"a38386b4-80d8-4037-8ca8-f9885dd37c2d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs"
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.321189     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9cjn\" (UniqueName: \"kubernetes.io/projected/5b1c4fe0-25e6-40ca-989f-123a98c5db4c-kube-api-access-d9cjn\") pod \"kubernetes-dashboard-8694d4445c-wrwks\" (UID: \"5b1c4fe0-25e6-40ca-989f-123a98c5db4c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks"
	Nov 01 10:19:30 old-k8s-version-556573 kubelet[724]: I1101 10:19:30.321242     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5b1c4fe0-25e6-40ca-989f-123a98c5db4c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-wrwks\" (UID: \"5b1c4fe0-25e6-40ca-989f-123a98c5db4c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks"
	Nov 01 10:19:33 old-k8s-version-556573 kubelet[724]: I1101 10:19:33.309210     724 scope.go:117] "RemoveContainer" containerID="bde06aead925fe64085c895a5eb0c5c67f24a46a77928cbf06e2e46734e7ef37"
	Nov 01 10:19:34 old-k8s-version-556573 kubelet[724]: I1101 10:19:34.313899     724 scope.go:117] "RemoveContainer" containerID="bde06aead925fe64085c895a5eb0c5c67f24a46a77928cbf06e2e46734e7ef37"
	Nov 01 10:19:34 old-k8s-version-556573 kubelet[724]: I1101 10:19:34.314097     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:34 old-k8s-version-556573 kubelet[724]: E1101 10:19:34.314475     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:19:35 old-k8s-version-556573 kubelet[724]: I1101 10:19:35.318301     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:35 old-k8s-version-556573 kubelet[724]: E1101 10:19:35.318729     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:19:37 old-k8s-version-556573 kubelet[724]: I1101 10:19:37.337679     724 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wrwks" podStartSLOduration=1.0881186999999999 podCreationTimestamp="2025-11-01 10:19:30 +0000 UTC" firstStartedPulling="2025-11-01 10:19:30.533894395 +0000 UTC m=+16.434392453" lastFinishedPulling="2025-11-01 10:19:36.783362149 +0000 UTC m=+22.683860207" observedRunningTime="2025-11-01 10:19:37.337032118 +0000 UTC m=+23.237530185" watchObservedRunningTime="2025-11-01 10:19:37.337586454 +0000 UTC m=+23.238084519"
	Nov 01 10:19:40 old-k8s-version-556573 kubelet[724]: I1101 10:19:40.510756     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:40 old-k8s-version-556573 kubelet[724]: E1101 10:19:40.511091     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:19:49 old-k8s-version-556573 kubelet[724]: I1101 10:19:49.357475     724 scope.go:117] "RemoveContainer" containerID="8fd6240f85ba7e33bc3cd42db7e4ecfbef506ccc7d5709f3945a260b4406ba64"
	Nov 01 10:19:52 old-k8s-version-556573 kubelet[724]: I1101 10:19:52.212312     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:52 old-k8s-version-556573 kubelet[724]: I1101 10:19:52.369450     724 scope.go:117] "RemoveContainer" containerID="9fef4db12aba93bdfec6181f6af18f44adfd1185043a9d0f8e41d1c01d294e12"
	Nov 01 10:19:52 old-k8s-version-556573 kubelet[724]: I1101 10:19:52.370091     724 scope.go:117] "RemoveContainer" containerID="1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc"
	Nov 01 10:19:52 old-k8s-version-556573 kubelet[724]: E1101 10:19:52.370595     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:20:00 old-k8s-version-556573 kubelet[724]: I1101 10:20:00.510233     724 scope.go:117] "RemoveContainer" containerID="1cca6171f6e63cab31d09aa8fa4b9d69f7f6e1ef72eaa2a00cccf28a86ac5bbc"
	Nov 01 10:20:00 old-k8s-version-556573 kubelet[724]: E1101 10:20:00.510712     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xdjzs_kubernetes-dashboard(a38386b4-80d8-4037-8ca8-f9885dd37c2d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xdjzs" podUID="a38386b4-80d8-4037-8ca8-f9885dd37c2d"
	Nov 01 10:20:03 old-k8s-version-556573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:20:03 old-k8s-version-556573 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:20:03 old-k8s-version-556573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:20:03 old-k8s-version-556573 systemd[1]: kubelet.service: Consumed 1.575s CPU time.
	
	
	==> kubernetes-dashboard [60c3ea523dc7210a6abdb204c3151d0227b798a7fb181e25b264e4e9037ad6a7] <==
	2025/11/01 10:19:36 Starting overwatch
	2025/11/01 10:19:36 Using namespace: kubernetes-dashboard
	2025/11/01 10:19:36 Using in-cluster config to connect to apiserver
	2025/11/01 10:19:36 Using secret token for csrf signing
	2025/11/01 10:19:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:19:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:19:36 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 10:19:36 Generating JWE encryption key
	2025/11/01 10:19:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:19:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:19:36 Initializing JWE encryption key from synchronized object
	2025/11/01 10:19:36 Creating in-cluster Sidecar client
	2025/11/01 10:19:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:19:36 Serving insecurely on HTTP port: 9090
	2025/11/01 10:20:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8fd6240f85ba7e33bc3cd42db7e4ecfbef506ccc7d5709f3945a260b4406ba64] <==
	I1101 10:19:18.587712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:19:48.590065       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [eb353e58c0fc17fac5140bb533292ff0eede9c2a117a3f00b2eda7320c1197f4] <==
	I1101 10:19:49.414095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:19:49.421742       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:19:49.421783       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:20:06.817774       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:20:06.817919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa58e27b-5340-4f47-971d-25a668ca76a2", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-556573_44a8a45b-0546-46bb-bacd-dc3136e956e8 became leader
	I1101 10:20:06.818016       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-556573_44a8a45b-0546-46bb-bacd-dc3136e956e8!
	I1101 10:20:06.918262       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-556573_44a8a45b-0546-46bb-bacd-dc3136e956e8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556573 -n old-k8s-version-556573
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556573 -n old-k8s-version-556573: exit status 2 (343.650101ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-556573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.74s)
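
For reference, the failing pause above can be retried by hand with the same CLI invocations captured in this post-mortem; a minimal sketch, assuming the old-k8s-version-556573 profile still exists on the build host (the commands and flags are copied from the report itself, not a prescribed debugging procedure):

    # retry the pause that the test expected to succeed
    out/minikube-linux-amd64 pause -p old-k8s-version-556573 --alsologtostderr -v=1
    # inspect the apiserver state afterwards; the captured run above still reported "Running"
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556573 -n old-k8s-version-556573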

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-680879 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-680879 --alsologtostderr -v=1: exit status 80 (1.820080483s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-680879 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:20:12.024074  759871 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:20:12.024365  759871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:12.024374  759871 out.go:374] Setting ErrFile to fd 2...
	I1101 10:20:12.024378  759871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:12.024580  759871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:20:12.024871  759871 out.go:368] Setting JSON to false
	I1101 10:20:12.024917  759871 mustload.go:66] Loading cluster: no-preload-680879
	I1101 10:20:12.025287  759871 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:12.025698  759871 cli_runner.go:164] Run: docker container inspect no-preload-680879 --format={{.State.Status}}
	I1101 10:20:12.044902  759871 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:20:12.045319  759871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:20:12.111967  759871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:82 SystemTime:2025-11-01 10:20:12.100578223 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:20:12.112741  759871 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-680879 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:20:12.114722  759871 out.go:179] * Pausing node no-preload-680879 ... 
	I1101 10:20:12.115787  759871 host.go:66] Checking if "no-preload-680879" exists ...
	I1101 10:20:12.116102  759871 ssh_runner.go:195] Run: systemctl --version
	I1101 10:20:12.116145  759871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-680879
	I1101 10:20:12.135688  759871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/no-preload-680879/id_rsa Username:docker}
	I1101 10:20:12.244892  759871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:12.259204  759871 pause.go:52] kubelet running: true
	I1101 10:20:12.259280  759871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:20:12.433057  759871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:20:12.433172  759871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:20:12.515292  759871 cri.go:89] found id: "b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6"
	I1101 10:20:12.515319  759871 cri.go:89] found id: "e3ba237b72ca6ee06f319e033870694f92cf60ca5f13ea437a84519543088d72"
	I1101 10:20:12.515324  759871 cri.go:89] found id: "9a10b2e01aeb85081f2b04b5828d1dbf0e67fb066ec31ec791b84b4b18c9b593"
	I1101 10:20:12.515327  759871 cri.go:89] found id: "ccbb79c8e1a4843e1b7bf4000208cf9402222b013115b8daf2351a7173d3e409"
	I1101 10:20:12.515330  759871 cri.go:89] found id: "063de29478f6f9a5582fb458f3bff8cab5c5ea9ba472292512dba0334c2bf18b"
	I1101 10:20:12.515333  759871 cri.go:89] found id: "6fe1794e14c177d264a3e5610bef578069b247e5deb7054c93fb9a70b2ccf7ba"
	I1101 10:20:12.515335  759871 cri.go:89] found id: "a1a084abd5f06aa1899bd7372a8496c6c8eb79b98488279f9c9679a6c0338270"
	I1101 10:20:12.515338  759871 cri.go:89] found id: "8a355ad3dea63414c9311a3f417e38b58b4c399b8aa2b4497aea7e6cd9510af8"
	I1101 10:20:12.515340  759871 cri.go:89] found id: "be916f84dfad93d8e52891dd7a642ef5783afd3b0e1978d42fc11b92d8812a08"
	I1101 10:20:12.515346  759871 cri.go:89] found id: "f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787"
	I1101 10:20:12.515348  759871 cri.go:89] found id: "10f3f9fd9deff3a9439579878660c06e4a23d5f18d25e273b1010876a5b9eb3d"
	I1101 10:20:12.515353  759871 cri.go:89] found id: ""
	I1101 10:20:12.515400  759871 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:20:12.528304  759871 retry.go:31] will retry after 147.544386ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:20:12.676738  759871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:12.691810  759871 pause.go:52] kubelet running: false
	I1101 10:20:12.691897  759871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:20:12.833681  759871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:20:12.833794  759871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:20:12.914992  759871 cri.go:89] found id: "b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6"
	I1101 10:20:12.915016  759871 cri.go:89] found id: "e3ba237b72ca6ee06f319e033870694f92cf60ca5f13ea437a84519543088d72"
	I1101 10:20:12.915019  759871 cri.go:89] found id: "9a10b2e01aeb85081f2b04b5828d1dbf0e67fb066ec31ec791b84b4b18c9b593"
	I1101 10:20:12.915023  759871 cri.go:89] found id: "ccbb79c8e1a4843e1b7bf4000208cf9402222b013115b8daf2351a7173d3e409"
	I1101 10:20:12.915026  759871 cri.go:89] found id: "063de29478f6f9a5582fb458f3bff8cab5c5ea9ba472292512dba0334c2bf18b"
	I1101 10:20:12.915035  759871 cri.go:89] found id: "6fe1794e14c177d264a3e5610bef578069b247e5deb7054c93fb9a70b2ccf7ba"
	I1101 10:20:12.915038  759871 cri.go:89] found id: "a1a084abd5f06aa1899bd7372a8496c6c8eb79b98488279f9c9679a6c0338270"
	I1101 10:20:12.915040  759871 cri.go:89] found id: "8a355ad3dea63414c9311a3f417e38b58b4c399b8aa2b4497aea7e6cd9510af8"
	I1101 10:20:12.915043  759871 cri.go:89] found id: "be916f84dfad93d8e52891dd7a642ef5783afd3b0e1978d42fc11b92d8812a08"
	I1101 10:20:12.915049  759871 cri.go:89] found id: "f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787"
	I1101 10:20:12.915052  759871 cri.go:89] found id: "10f3f9fd9deff3a9439579878660c06e4a23d5f18d25e273b1010876a5b9eb3d"
	I1101 10:20:12.915054  759871 cri.go:89] found id: ""
	I1101 10:20:12.915093  759871 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:20:12.947881  759871 retry.go:31] will retry after 542.845469ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:20:13.491737  759871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:20:13.506992  759871 pause.go:52] kubelet running: false
	I1101 10:20:13.507058  759871 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:20:13.669049  759871 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:20:13.669146  759871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:20:13.749226  759871 cri.go:89] found id: "b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6"
	I1101 10:20:13.749266  759871 cri.go:89] found id: "e3ba237b72ca6ee06f319e033870694f92cf60ca5f13ea437a84519543088d72"
	I1101 10:20:13.749272  759871 cri.go:89] found id: "9a10b2e01aeb85081f2b04b5828d1dbf0e67fb066ec31ec791b84b4b18c9b593"
	I1101 10:20:13.749277  759871 cri.go:89] found id: "ccbb79c8e1a4843e1b7bf4000208cf9402222b013115b8daf2351a7173d3e409"
	I1101 10:20:13.749281  759871 cri.go:89] found id: "063de29478f6f9a5582fb458f3bff8cab5c5ea9ba472292512dba0334c2bf18b"
	I1101 10:20:13.749289  759871 cri.go:89] found id: "6fe1794e14c177d264a3e5610bef578069b247e5deb7054c93fb9a70b2ccf7ba"
	I1101 10:20:13.749293  759871 cri.go:89] found id: "a1a084abd5f06aa1899bd7372a8496c6c8eb79b98488279f9c9679a6c0338270"
	I1101 10:20:13.749297  759871 cri.go:89] found id: "8a355ad3dea63414c9311a3f417e38b58b4c399b8aa2b4497aea7e6cd9510af8"
	I1101 10:20:13.749301  759871 cri.go:89] found id: "be916f84dfad93d8e52891dd7a642ef5783afd3b0e1978d42fc11b92d8812a08"
	I1101 10:20:13.749321  759871 cri.go:89] found id: "f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787"
	I1101 10:20:13.749329  759871 cri.go:89] found id: "10f3f9fd9deff3a9439579878660c06e4a23d5f18d25e273b1010876a5b9eb3d"
	I1101 10:20:13.749333  759871 cri.go:89] found id: ""
	I1101 10:20:13.749387  759871 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:20:13.766621  759871 out.go:203] 
	W1101 10:20:13.767775  759871 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:20:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:20:13.767805  759871 out.go:285] * 
	* 
	W1101 10:20:13.774397  759871 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:20:13.775731  759871 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-680879 --alsologtostderr -v=1 failed: exit status 80
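Every pause attempt captured in the stderr above fails at the same step: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", even though crictl has just listed running containers in the kube-system and kubernetes-dashboard namespaces. A plausible reading (an inference from this log, not something the report states) is that the OCI runtime CRI-O uses on this node keeps its state outside runc's default root, so the bare `runc list` probe finds no state directory at all. The Go sketch below reproduces the failing probe and checks which state root actually exists; the `docker exec` access path and the candidate roots /run/runc and /run/crun are assumptions, and this is a diagnostic aid, not minikube's implementation.

// probe_oci_root.go - diagnostic sketch only. Assumptions (not confirmed by this
// report): the minikube node is a local Docker container reachable via
// "docker exec", and /run/runc and /run/crun are the candidate OCI state roots.
package main

import (
	"fmt"
	"os/exec"
)

// runOnNode executes a command inside the node container, standing in for the
// SSH path the pause code uses in the log above.
func runOnNode(node string, args ...string) (string, error) {
	full := append([]string{"exec", node}, args...)
	out, err := exec.Command("docker", full...).CombinedOutput()
	return string(out), err
}

func main() {
	node := "no-preload-680879" // container name taken from the report above

	// The exact call that fails in the log (exit 1, "open /run/runc: no such file or directory").
	if out, err := runOnNode(node, "sudo", "runc", "list", "-f", "json"); err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
	} else {
		fmt.Printf("runc list succeeded:\n%s", out)
	}

	// crictl still reports running containers, so the containers exist; check
	// which runtime state directory is actually present on the node.
	for _, root := range []string{"/run/runc", "/run/crun"} {
		if out, err := runOnNode(node, "sudo", "ls", root); err != nil {
			fmt.Printf("%s: missing or unreadable (%v)\n", root, err)
		} else {
			fmt.Printf("%s exists:\n%s", root, out)
		}
	}
}

If the probe shows /run/crun populated while /run/runc is absent, that would be consistent with the failure pattern above; again, that is an inference to verify on the node, not a conclusion the report itself draws.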
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-680879
helpers_test.go:243: (dbg) docker inspect no-preload-680879:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48",
	        "Created": "2025-11-01T10:17:55.281485116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 752010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:19:14.194449167Z",
	            "FinishedAt": "2025-11-01T10:19:13.195263083Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/hostname",
	        "HostsPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/hosts",
	        "LogPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48-json.log",
	        "Name": "/no-preload-680879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-680879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-680879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48",
	                "LowerDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-680879",
	                "Source": "/var/lib/docker/volumes/no-preload-680879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-680879",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-680879",
	                "name.minikube.sigs.k8s.io": "no-preload-680879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f06a664d779d3330349feba3d88609d4fbc5691e2bd76b6885b8f106aff0fe0",
	            "SandboxKey": "/var/run/docker/netns/2f06a664d779",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-680879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:70:1c:0a:58:bd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11522e762cf9612c2344c4fb5a0996d332b23497f30d211d4b6878b748af077f",
	                    "EndpointID": "e48c66ad338e075b3873f784705d2fa3e80451cad542dff9f401fffc21039b3a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-680879",
	                        "bdead49b30b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680879 -n no-preload-680879
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680879 -n no-preload-680879: exit status 2 (411.245223ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
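The status probe above prints "Running" for the host yet exits 2, which the harness treats as possibly benign. One hedged reading: the failed pause had already run `systemctl disable --now kubelet` on the node (see the stderr earlier in this section), so the host container is up while the cluster components are not. If the exit code follows the bit encoding described in `minikube status --help` (an assumption here, not confirmed by this report), it decodes as exactly that. A minimal sketch of that decoding:

// decode_status_exit.go - sketch under an assumption: `minikube status` encodes
// component health as bit flags in its exit code, per its own --help text; the
// mapping below is illustrative rather than taken from this report.
package main

import "fmt"

func main() {
	exitCode := 2 // value returned above for no-preload-680879 after the failed pause

	checks := []struct {
		bit  int
		desc string
	}{
		{1, "host not running"},
		{2, "control plane not running"},
		{4, "kubernetes components not healthy"},
	}
	if exitCode == 0 {
		fmt.Println("all components reported Running")
		return
	}
	for _, c := range checks {
		if exitCode&c.bit != 0 {
			fmt.Println(c.desc) // exit code 2 -> "control plane not running"
		}
	}
}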
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-680879 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-680879 logs -n 25: (1.298809644s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ stop    │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ ssh     │ force-systemd-flag-767379 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p force-systemd-flag-767379                                                                                                                                                                                                                  │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-556573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p no-preload-680879 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-556573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ old-k8s-version-556573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ no-preload-680879 image list --format=json                                                                                                                                                                                                    │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p no-preload-680879 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-678014        │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:20:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:20:13.401696  760328 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:20:13.401952  760328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:13.401963  760328 out.go:374] Setting ErrFile to fd 2...
	I1101 10:20:13.401968  760328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:13.402253  760328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:20:13.402824  760328 out.go:368] Setting JSON to false
	I1101 10:20:13.404194  760328 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10950,"bootTime":1761981463,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:20:13.404294  760328 start.go:143] virtualization: kvm guest
	I1101 10:20:13.406200  760328 out.go:179] * [embed-certs-678014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:20:13.407376  760328 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:20:13.407405  760328 notify.go:221] Checking for updates...
	I1101 10:20:13.409552  760328 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:20:13.410714  760328 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:20:13.411806  760328 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:20:13.412775  760328 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:20:13.413738  760328 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:20:13.415322  760328 config.go:182] Loaded profile config "cert-expiration-577441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:13.415440  760328 config.go:182] Loaded profile config "kubernetes-upgrade-949166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:13.415600  760328 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:13.415720  760328 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:20:13.439964  760328 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:20:13.440095  760328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:20:13.504071  760328 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:20:13.49253215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:20:13.504190  760328 docker.go:319] overlay module found
	I1101 10:20:13.505941  760328 out.go:179] * Using the docker driver based on user configuration
	I1101 10:20:13.507040  760328 start.go:309] selected driver: docker
	I1101 10:20:13.507058  760328 start.go:930] validating driver "docker" against <nil>
	I1101 10:20:13.507074  760328 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:20:13.507798  760328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:20:13.580459  760328 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:20:13.569649506 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:20:13.580608  760328 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:20:13.580814  760328 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:20:13.582472  760328 out.go:179] * Using Docker driver with root privileges
	I1101 10:20:13.583580  760328 cni.go:84] Creating CNI manager for ""
	I1101 10:20:13.583664  760328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:20:13.583676  760328 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:20:13.583762  760328 start.go:353] cluster config:
	{Name:embed-certs-678014 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-678014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:20:13.588972  760328 out.go:179] * Starting "embed-certs-678014" primary control-plane node in "embed-certs-678014" cluster
	I1101 10:20:13.590057  760328 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:20:13.591077  760328 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:20:13.592045  760328 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:20:13.592104  760328 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:20:13.592124  760328 cache.go:59] Caching tarball of preloaded images
	I1101 10:20:13.592169  760328 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:20:13.592249  760328 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:20:13.592264  760328 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:20:13.592391  760328 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/embed-certs-678014/config.json ...
	I1101 10:20:13.592413  760328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/embed-certs-678014/config.json: {Name:mk36e3e3abc5a2547332a3b70107af374e4a06a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:13.614641  760328 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:20:13.614667  760328 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:20:13.614688  760328 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:20:13.614723  760328 start.go:360] acquireMachinesLock for embed-certs-678014: {Name:mkdb75ea98aa522a9491180dc21f0d42e5d5a627 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:20:13.614862  760328 start.go:364] duration metric: took 93.62µs to acquireMachinesLock for "embed-certs-678014"
	I1101 10:20:13.614901  760328 start.go:93] Provisioning new machine with config: &{Name:embed-certs-678014 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-678014 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:20:13.614994  760328 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 01 10:19:34 no-preload-680879 crio[566]: time="2025-11-01T10:19:34.71210514Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:19:34 no-preload-680879 crio[566]: time="2025-11-01T10:19:34.716700659Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:19:34 no-preload-680879 crio[566]: time="2025-11-01T10:19:34.716734342Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.843046227Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=34f58433-7f8d-4503-b622-10a4f573c593 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.843889089Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e5fc515b-fa30-4a44-90bb-e7ea51b5fbef name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.844779613Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph/dashboard-metrics-scraper" id=58beffe8-d91a-470b-85ae-112680b4a02a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.844924468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.85217659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.852629721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.88468346Z" level=info msg="Created container f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph/dashboard-metrics-scraper" id=58beffe8-d91a-470b-85ae-112680b4a02a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.885382301Z" level=info msg="Starting container: f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787" id=b750bc24-e15f-4e9b-8d69-5ea934f106c7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.887217765Z" level=info msg="Started container" PID=1747 containerID=f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph/dashboard-metrics-scraper id=b750bc24-e15f-4e9b-8d69-5ea934f106c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d77c5464343a277de7876defc6c7f27c493e31699f8e951977373d2e673b014
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.958502984Z" level=info msg="Removing container: bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd" id=4c76d127-ff17-48e6-ab3e-ff5fa714cafb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.968605123Z" level=info msg="Removed container bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph/dashboard-metrics-scraper" id=4c76d127-ff17-48e6-ab3e-ff5fa714cafb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.970725251Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c6ac34d8-a101-4bef-b33c-e6db2dc3ba9c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.971867427Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e54e979d-2bbc-4bcc-9c29-bcba17d32e40 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.973047858Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b4ccefec-8614-4567-b5bc-cd1936cec3b7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.973211698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.978176913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.97839409Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0528660d94e215c527446391647c5c1c25e1c0f3fd1d9a7114bd076da1749ee2/merged/etc/passwd: no such file or directory"
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.978491219Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0528660d94e215c527446391647c5c1c25e1c0f3fd1d9a7114bd076da1749ee2/merged/etc/group: no such file or directory"
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.978784001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:55 no-preload-680879 crio[566]: time="2025-11-01T10:19:55.008871766Z" level=info msg="Created container b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6: kube-system/storage-provisioner/storage-provisioner" id=b4ccefec-8614-4567-b5bc-cd1936cec3b7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:55 no-preload-680879 crio[566]: time="2025-11-01T10:19:55.009571828Z" level=info msg="Starting container: b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6" id=71d593c7-6b5e-490e-b5bd-2370960375c8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:55 no-preload-680879 crio[566]: time="2025-11-01T10:19:55.011761298Z" level=info msg="Started container" PID=1761 containerID=b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6 description=kube-system/storage-provisioner/storage-provisioner id=71d593c7-6b5e-490e-b5bd-2370960375c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b0ffb81b149716527f4b8d1821ec520c280028139db3bbdf1268d418c65f14fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b3f88a77e7304       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   b0ffb81b14971       storage-provisioner                          kube-system
	f828032dc0b17       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   8d77c5464343a       dashboard-metrics-scraper-6ffb444bf9-f9mph   kubernetes-dashboard
	10f3f9fd9deff       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   596feda1b06e1       kubernetes-dashboard-855c9754f9-6hkgl        kubernetes-dashboard
	e3ba237b72ca6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   fa097bd82f740       coredns-66bc5c9577-rh4z7                     kube-system
	e9822ca6642dd       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   85d0455b76cfa       busybox                                      default
	9a10b2e01aeb8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   35ebaebc66d0d       kube-proxy-ft2vw                             kube-system
	ccbb79c8e1a48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   b0ffb81b14971       storage-provisioner                          kube-system
	063de29478f6f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   c64d0241d78e4       kindnet-sjzlx                                kube-system
	6fe1794e14c17       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   7e15f8a0c5af9       kube-controller-manager-no-preload-680879    kube-system
	a1a084abd5f06       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   faf28a8999560       kube-apiserver-no-preload-680879             kube-system
	8a355ad3dea63       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   35b313908bb0d       etcd-no-preload-680879                       kube-system
	be916f84dfad9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   e944298f921ec       kube-scheduler-no-preload-680879             kube-system
	
	
	==> coredns [e3ba237b72ca6ee06f319e033870694f92cf60ca5f13ea437a84519543088d72] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34316 - 32450 "HINFO IN 4549008030702271427.4458488138960966621. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033793092s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-680879
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-680879
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=no-preload-680879
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_18_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:18:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-680879
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:20:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:19:53 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:19:53 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:19:53 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:19:53 +0000   Sat, 01 Nov 2025 10:19:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-680879
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                60389b87-92db-45cc-8d8b-f8362e2caec7
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-rh4z7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-no-preload-680879                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-sjzlx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-680879              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-680879     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-ft2vw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-680879              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-f9mph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6hkgl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node no-preload-680879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node no-preload-680879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node no-preload-680879 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node no-preload-680879 event: Registered Node no-preload-680879 in Controller
	  Normal  NodeReady                93s                kubelet          Node no-preload-680879 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node no-preload-680879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node no-preload-680879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node no-preload-680879 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node no-preload-680879 event: Registered Node no-preload-680879 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [8a355ad3dea63414c9311a3f417e38b58b4c399b8aa2b4497aea7e6cd9510af8] <==
	{"level":"warn","ts":"2025-11-01T10:19:22.513681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.524176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.531421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.538538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.546600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.554598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.562534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.570347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.578366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.591023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.598863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.615052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.621703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.630441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.637659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.645918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.653710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.661055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.669411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.677305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.684951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.705709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.712567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.720641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.771640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52114","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:20:15 up  3:02,  0 user,  load average: 2.34, 3.22, 2.68
	Linux no-preload-680879 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [063de29478f6f9a5582fb458f3bff8cab5c5ea9ba472292512dba0334c2bf18b] <==
	I1101 10:19:24.438340       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:19:24.438656       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:19:24.438827       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:19:24.438861       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:19:24.438886       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:19:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:19:24.690242       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:19:24.690273       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:19:24.690286       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:19:24.690718       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:19:25.190493       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:19:25.190526       1 metrics.go:72] Registering metrics
	I1101 10:19:25.190631       1 controller.go:711] "Syncing nftables rules"
	I1101 10:19:34.690950       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:19:34.691016       1 main.go:301] handling current node
	I1101 10:19:44.694028       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:19:44.694069       1 main.go:301] handling current node
	I1101 10:19:54.691064       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:19:54.691093       1 main.go:301] handling current node
	I1101 10:20:04.693257       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:20:04.693292       1 main.go:301] handling current node
	I1101 10:20:14.698976       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:20:14.699003       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a1a084abd5f06aa1899bd7372a8496c6c8eb79b98488279f9c9679a6c0338270] <==
	I1101 10:19:23.279563       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:19:23.279704       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:19:23.279774       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:19:23.279778       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:19:23.279791       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:19:23.281506       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:19:23.281642       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:19:23.282411       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:19:23.282571       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:19:23.284098       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:19:23.287957       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:19:23.298615       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:19:23.298657       1 policy_source.go:240] refreshing policies
	I1101 10:19:23.395881       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:19:23.551274       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:19:23.582807       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:19:23.605295       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:19:23.613467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:19:23.621935       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:19:23.658712       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.171.158"}
	I1101 10:19:23.670310       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.23.24"}
	I1101 10:19:24.187395       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:19:26.993086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:19:27.040863       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:19:27.091799       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6fe1794e14c177d264a3e5610bef578069b247e5deb7054c93fb9a70b2ccf7ba] <==
	I1101 10:19:26.602899       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:19:26.605159       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:19:26.607376       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:19:26.610586       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:19:26.613856       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:19:26.617098       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:19:26.618392       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:19:26.637538       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:19:26.637566       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:19:26.637596       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:19:26.637617       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:19:26.637641       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:19:26.637746       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:19:26.637823       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-680879"
	I1101 10:19:26.637865       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:19:26.637884       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:19:26.637962       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:19:26.638034       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:19:26.638465       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:19:26.643414       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:19:26.644590       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:19:26.644607       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:19:26.644613       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:19:26.647274       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:19:26.661570       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9a10b2e01aeb85081f2b04b5828d1dbf0e67fb066ec31ec791b84b4b18c9b593] <==
	I1101 10:19:24.244663       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:19:24.317164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:19:24.418165       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:19:24.418203       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:19:24.418323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:19:24.437526       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:19:24.437577       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:19:24.443490       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:19:24.443980       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:19:24.444010       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:19:24.445003       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:19:24.445028       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:19:24.445026       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:19:24.445042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:19:24.445056       1 config.go:309] "Starting node config controller"
	I1101 10:19:24.445066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:19:24.445174       1 config.go:200] "Starting service config controller"
	I1101 10:19:24.445257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:19:24.545697       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:19:24.545729       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:19:24.545736       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:19:24.545712       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [be916f84dfad93d8e52891dd7a642ef5783afd3b0e1978d42fc11b92d8812a08] <==
	I1101 10:19:21.989598       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:19:23.242774       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:19:23.242806       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:19:23.250113       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:19:23.250178       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:19:23.250223       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:19:23.250233       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:19:23.250249       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:19:23.250257       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:19:23.251266       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:19:23.251347       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:19:23.350639       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:19:23.350654       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:19:23.350648       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:19:27 no-preload-680879 kubelet[713]: I1101 10:19:27.427758     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f900f90-a9a1-4eed-850a-436ba6064cd9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-f9mph\" (UID: \"8f900f90-a9a1-4eed-850a-436ba6064cd9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph"
	Nov 01 10:19:27 no-preload-680879 kubelet[713]: I1101 10:19:27.427790     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5k9p\" (UniqueName: \"kubernetes.io/projected/f7ef4e23-14fd-41d1-a72b-4107d31b74a9-kube-api-access-h5k9p\") pod \"kubernetes-dashboard-855c9754f9-6hkgl\" (UID: \"f7ef4e23-14fd-41d1-a72b-4107d31b74a9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hkgl"
	Nov 01 10:19:27 no-preload-680879 kubelet[713]: I1101 10:19:27.427852     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7ef4e23-14fd-41d1-a72b-4107d31b74a9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6hkgl\" (UID: \"f7ef4e23-14fd-41d1-a72b-4107d31b74a9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hkgl"
	Nov 01 10:19:28 no-preload-680879 kubelet[713]: I1101 10:19:28.516047     713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:19:29 no-preload-680879 kubelet[713]: I1101 10:19:29.893334     713 scope.go:117] "RemoveContainer" containerID="6d0aa525c52aeb301c249bad65fa02461768e4a1ca506a75a5771f061d491074"
	Nov 01 10:19:30 no-preload-680879 kubelet[713]: I1101 10:19:30.899192     713 scope.go:117] "RemoveContainer" containerID="6d0aa525c52aeb301c249bad65fa02461768e4a1ca506a75a5771f061d491074"
	Nov 01 10:19:30 no-preload-680879 kubelet[713]: I1101 10:19:30.899407     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:30 no-preload-680879 kubelet[713]: E1101 10:19:30.899642     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:19:31 no-preload-680879 kubelet[713]: I1101 10:19:31.904308     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:31 no-preload-680879 kubelet[713]: E1101 10:19:31.904521     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:19:33 no-preload-680879 kubelet[713]: I1101 10:19:33.921560     713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hkgl" podStartSLOduration=0.707343216 podStartE2EDuration="6.921539295s" podCreationTimestamp="2025-11-01 10:19:27 +0000 UTC" firstStartedPulling="2025-11-01 10:19:27.59178106 +0000 UTC m=+6.838688144" lastFinishedPulling="2025-11-01 10:19:33.805977138 +0000 UTC m=+13.052884223" observedRunningTime="2025-11-01 10:19:33.921195589 +0000 UTC m=+13.168102694" watchObservedRunningTime="2025-11-01 10:19:33.921539295 +0000 UTC m=+13.168446398"
	Nov 01 10:19:39 no-preload-680879 kubelet[713]: I1101 10:19:39.691408     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:39 no-preload-680879 kubelet[713]: E1101 10:19:39.691670     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:19:50 no-preload-680879 kubelet[713]: I1101 10:19:50.842612     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:50 no-preload-680879 kubelet[713]: I1101 10:19:50.957091     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:50 no-preload-680879 kubelet[713]: I1101 10:19:50.957332     713 scope.go:117] "RemoveContainer" containerID="f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787"
	Nov 01 10:19:50 no-preload-680879 kubelet[713]: E1101 10:19:50.957540     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:19:54 no-preload-680879 kubelet[713]: I1101 10:19:54.970297     713 scope.go:117] "RemoveContainer" containerID="ccbb79c8e1a4843e1b7bf4000208cf9402222b013115b8daf2351a7173d3e409"
	Nov 01 10:19:59 no-preload-680879 kubelet[713]: I1101 10:19:59.691058     713 scope.go:117] "RemoveContainer" containerID="f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787"
	Nov 01 10:19:59 no-preload-680879 kubelet[713]: E1101 10:19:59.691252     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:20:12 no-preload-680879 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:20:12 no-preload-680879 kubelet[713]: I1101 10:20:12.408747     713 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 10:20:12 no-preload-680879 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:20:12 no-preload-680879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:20:12 no-preload-680879 systemd[1]: kubelet.service: Consumed 1.680s CPU time.
	
	
	==> kubernetes-dashboard [10f3f9fd9deff3a9439579878660c06e4a23d5f18d25e273b1010876a5b9eb3d] <==
	2025/11/01 10:19:33 Starting overwatch
	2025/11/01 10:19:33 Using namespace: kubernetes-dashboard
	2025/11/01 10:19:33 Using in-cluster config to connect to apiserver
	2025/11/01 10:19:33 Using secret token for csrf signing
	2025/11/01 10:19:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:19:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:19:33 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:19:33 Generating JWE encryption key
	2025/11/01 10:19:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:19:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:19:33 Initializing JWE encryption key from synchronized object
	2025/11/01 10:19:33 Creating in-cluster Sidecar client
	2025/11/01 10:19:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:19:33 Serving insecurely on HTTP port: 9090
	2025/11/01 10:20:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6] <==
	I1101 10:19:55.023613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:19:55.030351       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:19:55.030390       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:19:55.032638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:58.488298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:02.749168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:06.347241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:09.400997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:12.423058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:12.427635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:20:12.427828       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:20:12.427982       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6660dd7f-bed9-45cf-892b-1e6435b24faf", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-680879_47b07d80-62c7-4fbb-9af2-f3b0ccab4139 became leader
	I1101 10:20:12.428044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-680879_47b07d80-62c7-4fbb-9af2-f3b0ccab4139!
	W1101 10:20:12.430608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:12.437918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:20:12.528284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-680879_47b07d80-62c7-4fbb-9af2-f3b0ccab4139!
	W1101 10:20:14.441558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:14.446552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ccbb79c8e1a4843e1b7bf4000208cf9402222b013115b8daf2351a7173d3e409] <==
	I1101 10:19:24.205055       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:19:54.209254       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680879 -n no-preload-680879
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680879 -n no-preload-680879: exit status 2 (373.231105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-680879 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-680879
helpers_test.go:243: (dbg) docker inspect no-preload-680879:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48",
	        "Created": "2025-11-01T10:17:55.281485116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 752010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:19:14.194449167Z",
	            "FinishedAt": "2025-11-01T10:19:13.195263083Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/hostname",
	        "HostsPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/hosts",
	        "LogPath": "/var/lib/docker/containers/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48/bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48-json.log",
	        "Name": "/no-preload-680879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-680879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-680879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bdead49b30b390541af32dfaf37cc0b08c6e1e3e131249edec612a1022c14d48",
	                "LowerDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/851744e87e484e042cd1c2bc342874a85acae0c6d3effc243aa6ce3e70fb73e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-680879",
	                "Source": "/var/lib/docker/volumes/no-preload-680879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-680879",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-680879",
	                "name.minikube.sigs.k8s.io": "no-preload-680879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f06a664d779d3330349feba3d88609d4fbc5691e2bd76b6885b8f106aff0fe0",
	            "SandboxKey": "/var/run/docker/netns/2f06a664d779",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-680879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:70:1c:0a:58:bd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11522e762cf9612c2344c4fb5a0996d332b23497f30d211d4b6878b748af077f",
	                    "EndpointID": "e48c66ad338e075b3873f784705d2fa3e80451cad542dff9f401fffc21039b3a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-680879",
	                        "bdead49b30b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
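Editor's note: the docker inspect dump above is what the post-mortem helper records for the kic container. The fields most relevant to reading the rest of this post-mortem are the published API server port (8443/tcp published at 127.0.0.1:33191) and the container's address on the no-preload-680879 network (192.168.85.2). As a minimal sketch, not part of the test harness, the same fields can be pulled back out of `docker inspect` as follows; it assumes a local Docker CLI and reuses the container name shown in this report.

// Sketch only: re-extract the port binding and container IP that the
// inspect dump above reports, by shelling out to `docker inspect` and
// decoding just those fields.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspectRecord struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
		Networks map[string]struct {
			IPAddress string
		}
	}
}

func main() {
	// Container name taken from this report.
	out, err := exec.Command("docker", "inspect", "no-preload-680879").Output()
	if err != nil {
		log.Fatal(err)
	}
	// docker inspect prints a JSON array, one record per container.
	var records []inspectRecord
	if err := json.Unmarshal(out, &records); err != nil {
		log.Fatal(err)
	}
	for _, r := range records {
		for _, b := range r.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
		for name, n := range r.NetworkSettings.Networks {
			fmt.Printf("network %s: container IP %s\n", name, n.IPAddress)
		}
	}
}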
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680879 -n no-preload-680879
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680879 -n no-preload-680879: exit status 2 (355.669221ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
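Editor's note: the host probe above prints "Running" on stdout yet exits with status 2, which the helper explicitly flags as possibly benign before moving on to collect post-mortem logs. Below is a minimal sketch, not part of the harness, of the same probe capturing both the printed host state and the exit code; the binary path, profile, and node name are copied verbatim from the command in this report.

// Sketch only: run the same status probe as the helper and report the
// exit code alongside the host state it prints.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-680879", "-n", "no-preload-680879")
	out, err := cmd.Output() // stdout is still returned on a non-zero exit
	fmt.Printf("host state: %s\n", out)
	if ee, ok := err.(*exec.ExitError); ok {
		// The report notes a non-zero code here (2) "may be ok".
		fmt.Printf("exit code: %d\n", ee.ExitCode())
	} else if err != nil {
		fmt.Println(err)
	}
}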
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-680879 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-680879 logs -n 25: (3.005790454s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-949166 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ stop    │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p NoKubernetes-194729 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ ssh     │ -p NoKubernetes-194729 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │                     │
	│ delete  │ -p NoKubernetes-194729                                                                                                                                                                                                                        │ NoKubernetes-194729       │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ ssh     │ force-systemd-flag-767379 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ delete  │ -p force-systemd-flag-767379                                                                                                                                                                                                                  │ force-systemd-flag-767379 │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-556573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p no-preload-680879 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-556573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ old-k8s-version-556573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ no-preload-680879 image list --format=json                                                                                                                                                                                                    │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p no-preload-680879 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-680879         │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573    │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-678014        │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:20:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:20:13.401696  760328 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:20:13.401952  760328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:13.401963  760328 out.go:374] Setting ErrFile to fd 2...
	I1101 10:20:13.401968  760328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:13.402253  760328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:20:13.402824  760328 out.go:368] Setting JSON to false
	I1101 10:20:13.404194  760328 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10950,"bootTime":1761981463,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:20:13.404294  760328 start.go:143] virtualization: kvm guest
	I1101 10:20:13.406200  760328 out.go:179] * [embed-certs-678014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:20:13.407376  760328 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:20:13.407405  760328 notify.go:221] Checking for updates...
	I1101 10:20:13.409552  760328 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:20:13.410714  760328 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:20:13.411806  760328 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:20:13.412775  760328 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:20:13.413738  760328 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:20:13.415322  760328 config.go:182] Loaded profile config "cert-expiration-577441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:13.415440  760328 config.go:182] Loaded profile config "kubernetes-upgrade-949166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:13.415600  760328 config.go:182] Loaded profile config "no-preload-680879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:13.415720  760328 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:20:13.439964  760328 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:20:13.440095  760328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:20:13.504071  760328 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:20:13.49253215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:20:13.504190  760328 docker.go:319] overlay module found
	I1101 10:20:13.505941  760328 out.go:179] * Using the docker driver based on user configuration
	I1101 10:20:13.507040  760328 start.go:309] selected driver: docker
	I1101 10:20:13.507058  760328 start.go:930] validating driver "docker" against <nil>
	I1101 10:20:13.507074  760328 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:20:13.507798  760328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:20:13.580459  760328 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:20:13.569649506 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:20:13.580608  760328 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:20:13.580814  760328 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:20:13.582472  760328 out.go:179] * Using Docker driver with root privileges
	I1101 10:20:13.583580  760328 cni.go:84] Creating CNI manager for ""
	I1101 10:20:13.583664  760328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:20:13.583676  760328 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:20:13.583762  760328 start.go:353] cluster config:
	{Name:embed-certs-678014 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-678014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:20:13.588972  760328 out.go:179] * Starting "embed-certs-678014" primary control-plane node in "embed-certs-678014" cluster
	I1101 10:20:13.590057  760328 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:20:13.591077  760328 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:20:13.592045  760328 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:20:13.592104  760328 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:20:13.592124  760328 cache.go:59] Caching tarball of preloaded images
	I1101 10:20:13.592169  760328 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:20:13.592249  760328 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:20:13.592264  760328 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:20:13.592391  760328 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/embed-certs-678014/config.json ...
	I1101 10:20:13.592413  760328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/embed-certs-678014/config.json: {Name:mk36e3e3abc5a2547332a3b70107af374e4a06a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:13.614641  760328 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:20:13.614667  760328 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:20:13.614688  760328 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:20:13.614723  760328 start.go:360] acquireMachinesLock for embed-certs-678014: {Name:mkdb75ea98aa522a9491180dc21f0d42e5d5a627 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:20:13.614862  760328 start.go:364] duration metric: took 93.62µs to acquireMachinesLock for "embed-certs-678014"
	I1101 10:20:13.614901  760328 start.go:93] Provisioning new machine with config: &{Name:embed-certs-678014 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-678014 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:20:13.614994  760328 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:20:10.937769  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:10.938279  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:20:10.938345  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:10.938431  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:10.967832  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:10.967870  734517 cri.go:89] found id: ""
	I1101 10:20:10.967882  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:20:10.967938  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:10.972032  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:10.972113  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:11.000702  734517 cri.go:89] found id: ""
	I1101 10:20:11.000733  734517 logs.go:282] 0 containers: []
	W1101 10:20:11.000743  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:11.000751  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:11.000814  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:11.029936  734517 cri.go:89] found id: ""
	I1101 10:20:11.029974  734517 logs.go:282] 0 containers: []
	W1101 10:20:11.029985  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:11.029994  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:11.030056  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:11.057716  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:11.057738  734517 cri.go:89] found id: ""
	I1101 10:20:11.057747  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:11.057800  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:11.061813  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:11.061899  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:11.091294  734517 cri.go:89] found id: ""
	I1101 10:20:11.091321  734517 logs.go:282] 0 containers: []
	W1101 10:20:11.091330  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:11.091336  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:11.091394  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:11.119884  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:11.119907  734517 cri.go:89] found id: ""
	I1101 10:20:11.119915  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:11.119967  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:11.124212  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:11.124281  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:11.152966  734517 cri.go:89] found id: ""
	I1101 10:20:11.152996  734517 logs.go:282] 0 containers: []
	W1101 10:20:11.153007  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:11.153015  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:11.153082  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:11.183507  734517 cri.go:89] found id: ""
	I1101 10:20:11.183540  734517 logs.go:282] 0 containers: []
	W1101 10:20:11.183552  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:11.183566  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:20:11.183583  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:20:11.243642  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:20:11.243669  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:20:11.243689  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:11.280522  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:11.280557  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:11.331394  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:20:11.331444  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:11.362950  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:20:11.362981  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:20:11.419848  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:20:11.419888  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:20:11.453100  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:20:11.453133  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:20:11.540732  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:20:11.540776  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:20:14.062014  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:14.062528  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:20:14.062588  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:14.062638  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:14.101463  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:14.101486  734517 cri.go:89] found id: ""
	I1101 10:20:14.101495  734517 logs.go:282] 1 containers: [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:20:14.101543  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:14.107273  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:14.107523  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:14.147826  734517 cri.go:89] found id: ""
	I1101 10:20:14.147874  734517 logs.go:282] 0 containers: []
	W1101 10:20:14.147885  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:14.147893  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:14.147959  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:14.195281  734517 cri.go:89] found id: ""
	I1101 10:20:14.195313  734517 logs.go:282] 0 containers: []
	W1101 10:20:14.195324  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:14.195332  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:14.195395  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:14.234771  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:14.234794  734517 cri.go:89] found id: ""
	I1101 10:20:14.234803  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:14.234876  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:14.239514  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:14.239592  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:14.273257  734517 cri.go:89] found id: ""
	I1101 10:20:14.273285  734517 logs.go:282] 0 containers: []
	W1101 10:20:14.273296  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:14.273303  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:14.273367  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:14.310488  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:14.310512  734517 cri.go:89] found id: ""
	I1101 10:20:14.310524  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:14.310585  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:14.315270  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:14.315336  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:14.348199  734517 cri.go:89] found id: ""
	I1101 10:20:14.348231  734517 logs.go:282] 0 containers: []
	W1101 10:20:14.348243  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:14.348252  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:14.348317  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:14.394486  734517 cri.go:89] found id: ""
	I1101 10:20:14.394517  734517 logs.go:282] 0 containers: []
	W1101 10:20:14.394528  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:14.394542  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:20:14.394556  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:20:14.465622  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:20:14.465660  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:20:14.502599  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:20:14.502631  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:20:14.608457  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:20:14.608507  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:20:14.629664  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:20:14.629699  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:20:14.700720  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:20:14.700744  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:20:14.700772  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:14.739583  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:14.739620  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:14.805750  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:20:14.805807  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	
	
	==> CRI-O <==
	Nov 01 10:19:34 no-preload-680879 crio[566]: time="2025-11-01T10:19:34.71210514Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:19:34 no-preload-680879 crio[566]: time="2025-11-01T10:19:34.716700659Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:19:34 no-preload-680879 crio[566]: time="2025-11-01T10:19:34.716734342Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.843046227Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=34f58433-7f8d-4503-b622-10a4f573c593 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.843889089Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e5fc515b-fa30-4a44-90bb-e7ea51b5fbef name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.844779613Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph/dashboard-metrics-scraper" id=58beffe8-d91a-470b-85ae-112680b4a02a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.844924468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.85217659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.852629721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.88468346Z" level=info msg="Created container f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph/dashboard-metrics-scraper" id=58beffe8-d91a-470b-85ae-112680b4a02a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.885382301Z" level=info msg="Starting container: f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787" id=b750bc24-e15f-4e9b-8d69-5ea934f106c7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.887217765Z" level=info msg="Started container" PID=1747 containerID=f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph/dashboard-metrics-scraper id=b750bc24-e15f-4e9b-8d69-5ea934f106c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d77c5464343a277de7876defc6c7f27c493e31699f8e951977373d2e673b014
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.958502984Z" level=info msg="Removing container: bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd" id=4c76d127-ff17-48e6-ab3e-ff5fa714cafb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:19:50 no-preload-680879 crio[566]: time="2025-11-01T10:19:50.968605123Z" level=info msg="Removed container bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph/dashboard-metrics-scraper" id=4c76d127-ff17-48e6-ab3e-ff5fa714cafb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.970725251Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c6ac34d8-a101-4bef-b33c-e6db2dc3ba9c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.971867427Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e54e979d-2bbc-4bcc-9c29-bcba17d32e40 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.973047858Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b4ccefec-8614-4567-b5bc-cd1936cec3b7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.973211698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.978176913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.97839409Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0528660d94e215c527446391647c5c1c25e1c0f3fd1d9a7114bd076da1749ee2/merged/etc/passwd: no such file or directory"
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.978491219Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0528660d94e215c527446391647c5c1c25e1c0f3fd1d9a7114bd076da1749ee2/merged/etc/group: no such file or directory"
	Nov 01 10:19:54 no-preload-680879 crio[566]: time="2025-11-01T10:19:54.978784001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:19:55 no-preload-680879 crio[566]: time="2025-11-01T10:19:55.008871766Z" level=info msg="Created container b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6: kube-system/storage-provisioner/storage-provisioner" id=b4ccefec-8614-4567-b5bc-cd1936cec3b7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:19:55 no-preload-680879 crio[566]: time="2025-11-01T10:19:55.009571828Z" level=info msg="Starting container: b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6" id=71d593c7-6b5e-490e-b5bd-2370960375c8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:19:55 no-preload-680879 crio[566]: time="2025-11-01T10:19:55.011761298Z" level=info msg="Started container" PID=1761 containerID=b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6 description=kube-system/storage-provisioner/storage-provisioner id=71d593c7-6b5e-490e-b5bd-2370960375c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b0ffb81b149716527f4b8d1821ec520c280028139db3bbdf1268d418c65f14fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b3f88a77e7304       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   b0ffb81b14971       storage-provisioner                          kube-system
	f828032dc0b17       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   8d77c5464343a       dashboard-metrics-scraper-6ffb444bf9-f9mph   kubernetes-dashboard
	10f3f9fd9deff       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   596feda1b06e1       kubernetes-dashboard-855c9754f9-6hkgl        kubernetes-dashboard
	e3ba237b72ca6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   fa097bd82f740       coredns-66bc5c9577-rh4z7                     kube-system
	e9822ca6642dd       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   85d0455b76cfa       busybox                                      default
	9a10b2e01aeb8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   35ebaebc66d0d       kube-proxy-ft2vw                             kube-system
	ccbb79c8e1a48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   b0ffb81b14971       storage-provisioner                          kube-system
	063de29478f6f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   c64d0241d78e4       kindnet-sjzlx                                kube-system
	6fe1794e14c17       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   7e15f8a0c5af9       kube-controller-manager-no-preload-680879    kube-system
	a1a084abd5f06       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   faf28a8999560       kube-apiserver-no-preload-680879             kube-system
	8a355ad3dea63       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   35b313908bb0d       etcd-no-preload-680879                       kube-system
	be916f84dfad9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   e944298f921ec       kube-scheduler-no-preload-680879             kube-system
	
	
	==> coredns [e3ba237b72ca6ee06f319e033870694f92cf60ca5f13ea437a84519543088d72] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34316 - 32450 "HINFO IN 4549008030702271427.4458488138960966621. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033793092s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-680879
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-680879
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=no-preload-680879
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_18_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:18:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-680879
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:20:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:19:53 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:19:53 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:19:53 +0000   Sat, 01 Nov 2025 10:18:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:19:53 +0000   Sat, 01 Nov 2025 10:19:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-680879
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                60389b87-92db-45cc-8d8b-f8362e2caec7
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-rh4z7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-680879                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-sjzlx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-680879              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-680879     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-ft2vw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-680879              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-f9mph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6hkgl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-680879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-680879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-680879 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node no-preload-680879 event: Registered Node no-preload-680879 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-680879 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node no-preload-680879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node no-preload-680879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node no-preload-680879 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node no-preload-680879 event: Registered Node no-preload-680879 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [8a355ad3dea63414c9311a3f417e38b58b4c399b8aa2b4497aea7e6cd9510af8] <==
	{"level":"warn","ts":"2025-11-01T10:19:22.513681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.524176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.531421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.538538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.546600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.554598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.562534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.570347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.578366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.591023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.598863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.615052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.621703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.630441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.637659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.645918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.653710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.661055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.669411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.677305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.684951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.705709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.712567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.720641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:19:22.771640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52114","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:20:19 up  3:02,  0 user,  load average: 2.87, 3.32, 2.72
	Linux no-preload-680879 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [063de29478f6f9a5582fb458f3bff8cab5c5ea9ba472292512dba0334c2bf18b] <==
	I1101 10:19:24.438340       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:19:24.438656       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:19:24.438827       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:19:24.438861       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:19:24.438886       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:19:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:19:24.690242       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:19:24.690273       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:19:24.690286       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:19:24.690718       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:19:25.190493       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:19:25.190526       1 metrics.go:72] Registering metrics
	I1101 10:19:25.190631       1 controller.go:711] "Syncing nftables rules"
	I1101 10:19:34.690950       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:19:34.691016       1 main.go:301] handling current node
	I1101 10:19:44.694028       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:19:44.694069       1 main.go:301] handling current node
	I1101 10:19:54.691064       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:19:54.691093       1 main.go:301] handling current node
	I1101 10:20:04.693257       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:20:04.693292       1 main.go:301] handling current node
	I1101 10:20:14.698976       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:20:14.699003       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a1a084abd5f06aa1899bd7372a8496c6c8eb79b98488279f9c9679a6c0338270] <==
	I1101 10:19:23.279563       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:19:23.279704       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:19:23.279774       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:19:23.279778       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:19:23.279791       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:19:23.281506       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:19:23.281642       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:19:23.282411       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:19:23.282571       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:19:23.284098       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:19:23.287957       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:19:23.298615       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:19:23.298657       1 policy_source.go:240] refreshing policies
	I1101 10:19:23.395881       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:19:23.551274       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:19:23.582807       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:19:23.605295       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:19:23.613467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:19:23.621935       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:19:23.658712       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.171.158"}
	I1101 10:19:23.670310       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.23.24"}
	I1101 10:19:24.187395       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:19:26.993086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:19:27.040863       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:19:27.091799       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6fe1794e14c177d264a3e5610bef578069b247e5deb7054c93fb9a70b2ccf7ba] <==
	I1101 10:19:26.602899       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:19:26.605159       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:19:26.607376       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:19:26.610586       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:19:26.613856       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:19:26.617098       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:19:26.618392       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:19:26.637538       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:19:26.637566       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:19:26.637596       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:19:26.637617       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:19:26.637641       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:19:26.637746       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:19:26.637823       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-680879"
	I1101 10:19:26.637865       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:19:26.637884       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:19:26.637962       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:19:26.638034       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:19:26.638465       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:19:26.643414       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:19:26.644590       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:19:26.644607       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:19:26.644613       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:19:26.647274       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:19:26.661570       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9a10b2e01aeb85081f2b04b5828d1dbf0e67fb066ec31ec791b84b4b18c9b593] <==
	I1101 10:19:24.244663       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:19:24.317164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:19:24.418165       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:19:24.418203       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:19:24.418323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:19:24.437526       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:19:24.437577       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:19:24.443490       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:19:24.443980       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:19:24.444010       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:19:24.445003       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:19:24.445028       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:19:24.445026       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:19:24.445042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:19:24.445056       1 config.go:309] "Starting node config controller"
	I1101 10:19:24.445066       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:19:24.445174       1 config.go:200] "Starting service config controller"
	I1101 10:19:24.445257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:19:24.545697       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:19:24.545729       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:19:24.545736       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:19:24.545712       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [be916f84dfad93d8e52891dd7a642ef5783afd3b0e1978d42fc11b92d8812a08] <==
	I1101 10:19:21.989598       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:19:23.242774       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:19:23.242806       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:19:23.250113       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:19:23.250178       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:19:23.250223       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:19:23.250233       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:19:23.250249       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:19:23.250257       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:19:23.251266       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:19:23.251347       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:19:23.350639       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:19:23.350654       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:19:23.350648       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:19:27 no-preload-680879 kubelet[713]: I1101 10:19:27.427758     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f900f90-a9a1-4eed-850a-436ba6064cd9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-f9mph\" (UID: \"8f900f90-a9a1-4eed-850a-436ba6064cd9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph"
	Nov 01 10:19:27 no-preload-680879 kubelet[713]: I1101 10:19:27.427790     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5k9p\" (UniqueName: \"kubernetes.io/projected/f7ef4e23-14fd-41d1-a72b-4107d31b74a9-kube-api-access-h5k9p\") pod \"kubernetes-dashboard-855c9754f9-6hkgl\" (UID: \"f7ef4e23-14fd-41d1-a72b-4107d31b74a9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hkgl"
	Nov 01 10:19:27 no-preload-680879 kubelet[713]: I1101 10:19:27.427852     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7ef4e23-14fd-41d1-a72b-4107d31b74a9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6hkgl\" (UID: \"f7ef4e23-14fd-41d1-a72b-4107d31b74a9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hkgl"
	Nov 01 10:19:28 no-preload-680879 kubelet[713]: I1101 10:19:28.516047     713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:19:29 no-preload-680879 kubelet[713]: I1101 10:19:29.893334     713 scope.go:117] "RemoveContainer" containerID="6d0aa525c52aeb301c249bad65fa02461768e4a1ca506a75a5771f061d491074"
	Nov 01 10:19:30 no-preload-680879 kubelet[713]: I1101 10:19:30.899192     713 scope.go:117] "RemoveContainer" containerID="6d0aa525c52aeb301c249bad65fa02461768e4a1ca506a75a5771f061d491074"
	Nov 01 10:19:30 no-preload-680879 kubelet[713]: I1101 10:19:30.899407     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:30 no-preload-680879 kubelet[713]: E1101 10:19:30.899642     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:19:31 no-preload-680879 kubelet[713]: I1101 10:19:31.904308     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:31 no-preload-680879 kubelet[713]: E1101 10:19:31.904521     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:19:33 no-preload-680879 kubelet[713]: I1101 10:19:33.921560     713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hkgl" podStartSLOduration=0.707343216 podStartE2EDuration="6.921539295s" podCreationTimestamp="2025-11-01 10:19:27 +0000 UTC" firstStartedPulling="2025-11-01 10:19:27.59178106 +0000 UTC m=+6.838688144" lastFinishedPulling="2025-11-01 10:19:33.805977138 +0000 UTC m=+13.052884223" observedRunningTime="2025-11-01 10:19:33.921195589 +0000 UTC m=+13.168102694" watchObservedRunningTime="2025-11-01 10:19:33.921539295 +0000 UTC m=+13.168446398"
	Nov 01 10:19:39 no-preload-680879 kubelet[713]: I1101 10:19:39.691408     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:39 no-preload-680879 kubelet[713]: E1101 10:19:39.691670     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:19:50 no-preload-680879 kubelet[713]: I1101 10:19:50.842612     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:50 no-preload-680879 kubelet[713]: I1101 10:19:50.957091     713 scope.go:117] "RemoveContainer" containerID="bc6d7a0c7655f7501db8ed98fe145c27be72fe33527044ba206f7014f4ea6bcd"
	Nov 01 10:19:50 no-preload-680879 kubelet[713]: I1101 10:19:50.957332     713 scope.go:117] "RemoveContainer" containerID="f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787"
	Nov 01 10:19:50 no-preload-680879 kubelet[713]: E1101 10:19:50.957540     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:19:54 no-preload-680879 kubelet[713]: I1101 10:19:54.970297     713 scope.go:117] "RemoveContainer" containerID="ccbb79c8e1a4843e1b7bf4000208cf9402222b013115b8daf2351a7173d3e409"
	Nov 01 10:19:59 no-preload-680879 kubelet[713]: I1101 10:19:59.691058     713 scope.go:117] "RemoveContainer" containerID="f828032dc0b171472ab43cb16c9b5f1e248ee6710e23345aa7e3af0d4249a787"
	Nov 01 10:19:59 no-preload-680879 kubelet[713]: E1101 10:19:59.691252     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f9mph_kubernetes-dashboard(8f900f90-a9a1-4eed-850a-436ba6064cd9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f9mph" podUID="8f900f90-a9a1-4eed-850a-436ba6064cd9"
	Nov 01 10:20:12 no-preload-680879 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:20:12 no-preload-680879 kubelet[713]: I1101 10:20:12.408747     713 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 10:20:12 no-preload-680879 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:20:12 no-preload-680879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:20:12 no-preload-680879 systemd[1]: kubelet.service: Consumed 1.680s CPU time.
	
	
	==> kubernetes-dashboard [10f3f9fd9deff3a9439579878660c06e4a23d5f18d25e273b1010876a5b9eb3d] <==
	2025/11/01 10:19:33 Using namespace: kubernetes-dashboard
	2025/11/01 10:19:33 Using in-cluster config to connect to apiserver
	2025/11/01 10:19:33 Using secret token for csrf signing
	2025/11/01 10:19:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:19:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:19:33 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:19:33 Generating JWE encryption key
	2025/11/01 10:19:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:19:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:19:33 Initializing JWE encryption key from synchronized object
	2025/11/01 10:19:33 Creating in-cluster Sidecar client
	2025/11/01 10:19:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:19:33 Serving insecurely on HTTP port: 9090
	2025/11/01 10:20:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:19:33 Starting overwatch
	
	
	==> storage-provisioner [b3f88a77e7304ccb75255aa8f9a28ba16a587870acedd7ea1e77cab992e9b1c6] <==
	I1101 10:19:55.023613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:19:55.030351       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:19:55.030390       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:19:55.032638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:58.488298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:02.749168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:06.347241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:09.400997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:12.423058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:12.427635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:20:12.427828       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:20:12.427982       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6660dd7f-bed9-45cf-892b-1e6435b24faf", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-680879_47b07d80-62c7-4fbb-9af2-f3b0ccab4139 became leader
	I1101 10:20:12.428044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-680879_47b07d80-62c7-4fbb-9af2-f3b0ccab4139!
	W1101 10:20:12.430608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:12.437918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:20:12.528284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-680879_47b07d80-62c7-4fbb-9af2-f3b0ccab4139!
	W1101 10:20:14.441558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:14.446552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:16.450303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:16.456424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:18.459480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:20:18.482813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ccbb79c8e1a4843e1b7bf4000208cf9402222b013115b8daf2351a7173d3e409] <==
	I1101 10:19:24.205055       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:19:54.209254       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680879 -n no-preload-680879
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680879 -n no-preload-680879: exit status 2 (410.726487ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-680879 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.90s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-006653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-006653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (282.290128ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-006653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-006653
helpers_test.go:243: (dbg) docker inspect newest-cni-006653:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64",
	        "Created": "2025-11-01T10:20:40.630212993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 769844,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:20:40.688032695Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/hosts",
	        "LogPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64-json.log",
	        "Name": "/newest-cni-006653",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-006653:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-006653",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64",
	                "LowerDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-006653",
	                "Source": "/var/lib/docker/volumes/newest-cni-006653/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-006653",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-006653",
	                "name.minikube.sigs.k8s.io": "newest-cni-006653",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fce3f81f7861ca8db6d2770ed5f2bb578f21b4980b6804ba2ddd8694437be52e",
	            "SandboxKey": "/var/run/docker/netns/fce3f81f7861",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33207"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-006653": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:91:d9:4c:5f:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c02c09c0ce161b2b9f0f4d8dfbab9af05a638642c6978f8142ed5d4368be572",
	                    "EndpointID": "d97057ee56ebeeea738c579c61e0d098cfba7c3643623cefa51239bbbf8b53f6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-006653",
	                        "91a32a4040ae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006653 -n newest-cni-006653
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-006653 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-006653 logs -n 25: (1.064444428s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-767379                                                                                                                                                                                                                  │ force-systemd-flag-767379    │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:17 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:17 UTC │ 01 Nov 25 10:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-556573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p old-k8s-version-556573 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p no-preload-680879 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-556573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ old-k8s-version-556573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ no-preload-680879 image list --format=json                                                                                                                                                                                                    │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p no-preload-680879 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p disable-driver-mounts-083568                                                                                                                                                                                                               │ disable-driver-mounts-083568 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ start   │ -p cert-expiration-577441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p cert-expiration-577441                                                                                                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-006653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:20:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:20:34.895287  768708 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:20:34.895575  768708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:34.895586  768708 out.go:374] Setting ErrFile to fd 2...
	I1101 10:20:34.895590  768708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:20:34.895828  768708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:20:34.896379  768708 out.go:368] Setting JSON to false
	I1101 10:20:34.897965  768708 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10972,"bootTime":1761981463,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:20:34.898141  768708 start.go:143] virtualization: kvm guest
	I1101 10:20:34.899899  768708 out.go:179] * [newest-cni-006653] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:20:34.900939  768708 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:20:34.900960  768708 notify.go:221] Checking for updates...
	I1101 10:20:34.902788  768708 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:20:34.903801  768708 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:20:34.904822  768708 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:20:34.905868  768708 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:20:34.906945  768708 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:20:34.908415  768708 config.go:182] Loaded profile config "default-k8s-diff-port-535119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:34.908521  768708 config.go:182] Loaded profile config "embed-certs-678014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:34.908612  768708 config.go:182] Loaded profile config "kubernetes-upgrade-949166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:34.908740  768708 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:20:34.932460  768708 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:20:34.932585  768708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:20:34.998259  768708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:20:34.986591883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:20:34.998421  768708 docker.go:319] overlay module found
	I1101 10:20:34.999823  768708 out.go:179] * Using the docker driver based on user configuration
	I1101 10:20:35.000886  768708 start.go:309] selected driver: docker
	I1101 10:20:35.000901  768708 start.go:930] validating driver "docker" against <nil>
	I1101 10:20:35.000913  768708 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:20:35.001515  768708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:20:35.068028  768708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:20:35.056711573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:20:35.068277  768708 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 10:20:35.068316  768708 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 10:20:35.068572  768708 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:20:35.070280  768708 out.go:179] * Using Docker driver with root privileges
	I1101 10:20:35.071184  768708 cni.go:84] Creating CNI manager for ""
	I1101 10:20:35.071259  768708 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:20:35.071270  768708 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:20:35.071337  768708 start.go:353] cluster config:
	{Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:20:35.072413  768708 out.go:179] * Starting "newest-cni-006653" primary control-plane node in "newest-cni-006653" cluster
	I1101 10:20:35.073373  768708 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:20:35.074324  768708 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:20:35.075195  768708 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:20:35.075240  768708 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:20:35.075272  768708 cache.go:59] Caching tarball of preloaded images
	I1101 10:20:35.075308  768708 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:20:35.075454  768708 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:20:35.075471  768708 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:20:35.075600  768708 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json ...
	I1101 10:20:35.075626  768708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json: {Name:mkc9892d30a52dad6dd0fe91a925f3b047065463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:35.096730  768708 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:20:35.096759  768708 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:20:35.096781  768708 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:20:35.096822  768708 start.go:360] acquireMachinesLock for newest-cni-006653: {Name:mkf496d0b80c7855406646357bd774886a0856a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:20:35.096965  768708 start.go:364] duration metric: took 106.002µs to acquireMachinesLock for "newest-cni-006653"
	I1101 10:20:35.097002  768708 start.go:93] Provisioning new machine with config: &{Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:20:35.097107  768708 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:20:35.190766  760328 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:20:35.190890  760328 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:20:35.191006  760328 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:20:35.191102  760328 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:20:35.191156  760328 kubeadm.go:319] OS: Linux
	I1101 10:20:35.191220  760328 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:20:35.191285  760328 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:20:35.191358  760328 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:20:35.191420  760328 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:20:35.191512  760328 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:20:35.191607  760328 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:20:35.191678  760328 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:20:35.191742  760328 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:20:35.191904  760328 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:20:35.192051  760328 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:20:35.192168  760328 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:20:35.192287  760328 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:20:35.193695  760328 out.go:252]   - Generating certificates and keys ...
	I1101 10:20:35.193806  760328 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:20:35.193954  760328 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:20:35.194052  760328 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:20:35.194130  760328 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:20:35.194210  760328 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:20:35.194285  760328 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:20:35.194369  760328 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:20:35.194507  760328 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-678014 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 10:20:35.194579  760328 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:20:35.194744  760328 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-678014 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 10:20:35.194956  760328 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:20:35.195042  760328 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:20:35.195109  760328 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:20:35.195185  760328 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:20:35.195282  760328 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:20:35.195369  760328 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:20:35.195442  760328 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:20:35.195542  760328 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:20:35.195616  760328 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:20:35.195723  760328 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:20:35.195813  760328 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:20:35.197122  760328 out.go:252]   - Booting up control plane ...
	I1101 10:20:35.197233  760328 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:20:35.197351  760328 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:20:35.197447  760328 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:20:35.197597  760328 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:20:35.197748  760328 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:20:35.198078  760328 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:20:35.198211  760328 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:20:35.198268  760328 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:20:35.198461  760328 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:20:35.198662  760328 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:20:35.198749  760328 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001911028s
	I1101 10:20:35.198929  760328 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:20:35.199088  760328 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1101 10:20:35.199262  760328 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:20:35.199398  760328 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:20:35.199514  760328 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.245204428s
	I1101 10:20:35.199622  760328 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.958801967s
	I1101 10:20:35.199725  760328 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501766267s
	I1101 10:20:35.199867  760328 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:20:35.200053  760328 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:20:35.200151  760328 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:20:35.200447  760328 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-678014 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:20:35.200517  760328 kubeadm.go:319] [bootstrap-token] Using token: 68q9s0.g56a0l6h86z5uhzk
	I1101 10:20:35.201625  760328 out.go:252]   - Configuring RBAC rules ...
	I1101 10:20:35.201771  760328 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:20:35.201954  760328 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:20:35.202137  760328 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:20:35.202312  760328 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:20:35.202463  760328 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:20:35.202587  760328 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:20:35.202752  760328 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:20:35.202819  760328 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:20:35.202910  760328 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:20:35.202933  760328 kubeadm.go:319] 
	I1101 10:20:35.203016  760328 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:20:35.203031  760328 kubeadm.go:319] 
	I1101 10:20:35.203145  760328 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:20:35.203156  760328 kubeadm.go:319] 
	I1101 10:20:35.203188  760328 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:20:35.203264  760328 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:20:35.203330  760328 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:20:35.203340  760328 kubeadm.go:319] 
	I1101 10:20:35.203419  760328 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:20:35.203428  760328 kubeadm.go:319] 
	I1101 10:20:35.203508  760328 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:20:35.203518  760328 kubeadm.go:319] 
	I1101 10:20:35.203584  760328 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:20:35.203695  760328 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:20:35.203805  760328 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:20:35.203821  760328 kubeadm.go:319] 
	I1101 10:20:35.203939  760328 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:20:35.204072  760328 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:20:35.204090  760328 kubeadm.go:319] 
	I1101 10:20:35.204206  760328 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 68q9s0.g56a0l6h86z5uhzk \
	I1101 10:20:35.204343  760328 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 10:20:35.204381  760328 kubeadm.go:319] 	--control-plane 
	I1101 10:20:35.204389  760328 kubeadm.go:319] 
	I1101 10:20:35.204500  760328 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:20:35.204519  760328 kubeadm.go:319] 
	I1101 10:20:35.204630  760328 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 68q9s0.g56a0l6h86z5uhzk \
	I1101 10:20:35.204794  760328 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
	I1101 10:20:35.204808  760328 cni.go:84] Creating CNI manager for ""
	I1101 10:20:35.204817  760328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:20:35.206711  760328 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:20:33.612083  764436 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:20:34.041023  764436 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:20:34.555420  764436 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:20:34.838861  764436 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:20:35.024236  764436 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:20:35.024475  764436 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-535119 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:20:35.654772  764436 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:20:35.655066  764436 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-535119 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:20:35.969152  764436 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:20:36.301415  764436 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:20:36.358606  764436 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:20:36.358689  764436 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:20:36.539699  764436 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:20:36.980078  764436 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:20:37.259097  764436 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:20:37.655171  764436 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:20:37.993301  764436 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:20:37.994002  764436 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:20:38.003185  764436 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:20:38.008244  764436 out.go:252]   - Booting up control plane ...
	I1101 10:20:38.008403  764436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:20:38.008541  764436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:20:38.008602  764436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:20:38.020656  764436 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:20:38.020780  764436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:20:38.027709  764436 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:20:38.027894  764436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:20:38.027959  764436 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:20:38.135865  764436 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:20:38.136038  764436 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:20:35.207746  760328 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:20:35.213523  760328 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:20:35.213550  760328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:20:35.230638  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:20:35.514640  760328 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:20:35.514826  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:35.514983  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-678014 minikube.k8s.io/updated_at=2025_11_01T10_20_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=embed-certs-678014 minikube.k8s.io/primary=true
	I1101 10:20:35.530056  760328 ops.go:34] apiserver oom_adj: -16
	I1101 10:20:35.641502  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:36.141694  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:36.641662  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:37.142396  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:37.641952  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:38.141625  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:35.183112  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:20:35.183193  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:35.183271  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:35.218639  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:20:35.218666  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:35.218672  734517 cri.go:89] found id: ""
	I1101 10:20:35.218682  734517 logs.go:282] 2 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:20:35.218732  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:35.223621  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:35.228702  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:35.228770  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:35.261423  734517 cri.go:89] found id: ""
	I1101 10:20:35.261455  734517 logs.go:282] 0 containers: []
	W1101 10:20:35.261465  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:35.261473  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:35.261537  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:35.299012  734517 cri.go:89] found id: ""
	I1101 10:20:35.299044  734517 logs.go:282] 0 containers: []
	W1101 10:20:35.299055  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:35.299063  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:35.299130  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:35.336727  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:35.336756  734517 cri.go:89] found id: ""
	I1101 10:20:35.336767  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:35.336853  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:35.343260  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:35.343346  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:35.377221  734517 cri.go:89] found id: ""
	I1101 10:20:35.377249  734517 logs.go:282] 0 containers: []
	W1101 10:20:35.377260  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:35.377268  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:35.377331  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:35.417216  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:35.417242  734517 cri.go:89] found id: ""
	I1101 10:20:35.417252  734517 logs.go:282] 1 containers: [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:35.417316  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:35.421856  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:35.421928  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:35.460462  734517 cri.go:89] found id: ""
	I1101 10:20:35.460496  734517 logs.go:282] 0 containers: []
	W1101 10:20:35.460508  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:35.460516  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:35.460581  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:35.510125  734517 cri.go:89] found id: ""
	I1101 10:20:35.510156  734517 logs.go:282] 0 containers: []
	W1101 10:20:35.510167  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:35.510187  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:20:35.510203  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:20:35.556088  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:35.556131  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:35.639445  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:20:35.639494  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:35.678873  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:20:35.678958  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:20:35.755655  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:20:35.755696  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:20:35.862261  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:20:35.862311  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:20:35.888160  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:20:35.888201  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1101 10:20:35.098805  768708 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:20:35.099034  768708 start.go:159] libmachine.API.Create for "newest-cni-006653" (driver="docker")
	I1101 10:20:35.099064  768708 client.go:173] LocalClient.Create starting
	I1101 10:20:35.099146  768708 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem
	I1101 10:20:35.099177  768708 main.go:143] libmachine: Decoding PEM data...
	I1101 10:20:35.099195  768708 main.go:143] libmachine: Parsing certificate...
	I1101 10:20:35.099258  768708 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem
	I1101 10:20:35.099277  768708 main.go:143] libmachine: Decoding PEM data...
	I1101 10:20:35.099292  768708 main.go:143] libmachine: Parsing certificate...
	I1101 10:20:35.099601  768708 cli_runner.go:164] Run: docker network inspect newest-cni-006653 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:20:35.116928  768708 cli_runner.go:211] docker network inspect newest-cni-006653 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:20:35.117012  768708 network_create.go:284] running [docker network inspect newest-cni-006653] to gather additional debugging logs...
	I1101 10:20:35.117035  768708 cli_runner.go:164] Run: docker network inspect newest-cni-006653
	W1101 10:20:35.135903  768708 cli_runner.go:211] docker network inspect newest-cni-006653 returned with exit code 1
	I1101 10:20:35.135940  768708 network_create.go:287] error running [docker network inspect newest-cni-006653]: docker network inspect newest-cni-006653: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-006653 not found
	I1101 10:20:35.135980  768708 network_create.go:289] output of [docker network inspect newest-cni-006653]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-006653 not found
	
	** /stderr **
	I1101 10:20:35.136144  768708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:20:35.156598  768708 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db3052bfa0e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:6a:af:78:80:46} reservation:<nil>}
	I1101 10:20:35.157328  768708 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-99d2741e1e59 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:99:ce:91:38:1c} reservation:<nil>}
	I1101 10:20:35.158094  768708 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a696a61d1319 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:f0:66:2c:aa:f2} reservation:<nil>}
	I1101 10:20:35.158966  768708 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb8e50}
	I1101 10:20:35.158992  768708 network_create.go:124] attempt to create docker network newest-cni-006653 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:20:35.159042  768708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-006653 newest-cni-006653
	I1101 10:20:35.227968  768708 network_create.go:108] docker network newest-cni-006653 192.168.76.0/24 created
	I1101 10:20:35.228005  768708 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-006653" container
	I1101 10:20:35.228088  768708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:20:35.252174  768708 cli_runner.go:164] Run: docker volume create newest-cni-006653 --label name.minikube.sigs.k8s.io=newest-cni-006653 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:20:35.275336  768708 oci.go:103] Successfully created a docker volume newest-cni-006653
	I1101 10:20:35.275441  768708 cli_runner.go:164] Run: docker run --rm --name newest-cni-006653-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-006653 --entrypoint /usr/bin/test -v newest-cni-006653:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:20:35.772384  768708 oci.go:107] Successfully prepared a docker volume newest-cni-006653
	I1101 10:20:35.772570  768708 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:20:35.772603  768708 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:20:35.772685  768708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-006653:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:20:38.642265  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:39.141616  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:39.642496  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:40.142136  760328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:40.439032  760328 kubeadm.go:1114] duration metric: took 4.92425274s to wait for elevateKubeSystemPrivileges
	I1101 10:20:40.439075  760328 kubeadm.go:403] duration metric: took 15.057352758s to StartCluster
	I1101 10:20:40.439100  760328 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:40.439170  760328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:20:40.441536  760328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:40.441981  760328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:20:40.442706  760328 config.go:182] Loaded profile config "embed-certs-678014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:40.443041  760328 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:20:40.443096  760328 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:20:40.444258  760328 addons.go:70] Setting default-storageclass=true in profile "embed-certs-678014"
	I1101 10:20:40.444286  760328 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-678014"
	I1101 10:20:40.444717  760328 cli_runner.go:164] Run: docker container inspect embed-certs-678014 --format={{.State.Status}}
	I1101 10:20:40.444789  760328 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-678014"
	I1101 10:20:40.444811  760328 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-678014"
	I1101 10:20:40.444856  760328 host.go:66] Checking if "embed-certs-678014" exists ...
	I1101 10:20:40.445069  760328 out.go:179] * Verifying Kubernetes components...
	I1101 10:20:40.445360  760328 cli_runner.go:164] Run: docker container inspect embed-certs-678014 --format={{.State.Status}}
	I1101 10:20:40.450192  760328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:20:40.491089  760328 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:20:40.491135  760328 addons.go:239] Setting addon default-storageclass=true in "embed-certs-678014"
	I1101 10:20:40.491197  760328 host.go:66] Checking if "embed-certs-678014" exists ...
	I1101 10:20:40.491744  760328 cli_runner.go:164] Run: docker container inspect embed-certs-678014 --format={{.State.Status}}
	I1101 10:20:40.492775  760328 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:20:40.492797  760328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:20:40.492910  760328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-678014
	I1101 10:20:40.533223  760328 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:20:40.533253  760328 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:20:40.533342  760328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-678014
	I1101 10:20:40.562328  760328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/embed-certs-678014/id_rsa Username:docker}
	I1101 10:20:40.582112  760328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/embed-certs-678014/id_rsa Username:docker}
	I1101 10:20:40.636401  760328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
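The pipeline above injects a hosts block (mapping host.minikube.internal to the host gateway 192.168.94.1) plus a log directive into the CoreDNS Corefile and replaces the ConfigMap; the "host record injected" line further down confirms it took effect. A hedged way to see the result, assuming the usual minikube kubectl context name for this profile (the check itself is not part of the log):

    # Print the live Corefile and look for the injected hosts block.
    kubectl --context embed-certs-678014 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected to contain (per the sed expression above):
    #   hosts {
    #      192.168.94.1 host.minikube.internal
    #      fallthrough
    #   }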
	I1101 10:20:40.672312  760328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:20:40.720069  760328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:20:40.754251  760328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:20:40.965592  760328 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1101 10:20:40.971236  760328 node_ready.go:35] waiting up to 6m0s for node "embed-certs-678014" to be "Ready" ...
	I1101 10:20:41.218259  760328 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:20:39.137093  764436 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001425251s
	I1101 10:20:39.140170  764436 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:20:39.140275  764436 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1101 10:20:39.140421  764436 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:20:39.140502  764436 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:20:42.826271  764436 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.684086751s
	I1101 10:20:42.884268  764436 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.744066964s
	I1101 10:20:41.219152  760328 addons.go:515] duration metric: took 776.055213ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:20:41.479096  760328 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-678014" context rescaled to 1 replicas
	W1101 10:20:42.974643  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:20:40.462394  768708 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-006653:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.689634995s)
	I1101 10:20:40.462438  768708 kic.go:203] duration metric: took 4.689830487s to extract preloaded images to volume ...
	W1101 10:20:40.462572  768708 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:20:40.462615  768708 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:20:40.462666  768708 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:20:40.604487  768708 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-006653 --name newest-cni-006653 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-006653 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-006653 --network newest-cni-006653 --ip 192.168.76.2 --volume newest-cni-006653:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
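The docker run above starts the kicbase node container with several ports published on 127.0.0.1 (8443, 22, 2376, 5000, 32443), each bound to an ephemeral host port. As an illustrative check (not taken from the log), the host port assigned to SSH can be read back; the sshutil.go lines below show it resolved to 33203 for this container:

    # Which loopback port maps to the container's SSH port 22?
    docker port newest-cni-006653 22
    # expected output (per the sshutil.go lines below): 127.0.0.1:33203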
	I1101 10:20:41.053713  768708 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Running}}
	I1101 10:20:41.080749  768708 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:20:41.104908  768708 cli_runner.go:164] Run: docker exec newest-cni-006653 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:20:41.164142  768708 oci.go:144] the created container "newest-cni-006653" has a running status.
	I1101 10:20:41.164192  768708 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa...
	I1101 10:20:41.307557  768708 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:20:41.353517  768708 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:20:41.374765  768708 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:20:41.374792  768708 kic_runner.go:114] Args: [docker exec --privileged newest-cni-006653 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:20:41.432192  768708 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:20:41.460600  768708 machine.go:94] provisionDockerMachine start ...
	I1101 10:20:41.461890  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:41.489779  768708 main.go:143] libmachine: Using SSH client type: native
	I1101 10:20:41.490142  768708 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1101 10:20:41.490160  768708 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:20:41.658302  768708 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-006653
	
	I1101 10:20:41.658336  768708 ubuntu.go:182] provisioning hostname "newest-cni-006653"
	I1101 10:20:41.658407  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:41.680314  768708 main.go:143] libmachine: Using SSH client type: native
	I1101 10:20:41.680557  768708 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1101 10:20:41.680576  768708 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-006653 && echo "newest-cni-006653" | sudo tee /etc/hostname
	I1101 10:20:41.853254  768708 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-006653
	
	I1101 10:20:41.853352  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:41.876642  768708 main.go:143] libmachine: Using SSH client type: native
	I1101 10:20:41.876968  768708 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1101 10:20:41.876998  768708 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-006653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-006653/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-006653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:20:42.034871  768708 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:20:42.034922  768708 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:20:42.034960  768708 ubuntu.go:190] setting up certificates
	I1101 10:20:42.034978  768708 provision.go:84] configureAuth start
	I1101 10:20:42.035053  768708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:20:42.059014  768708 provision.go:143] copyHostCerts
	I1101 10:20:42.059194  768708 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:20:42.059241  768708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:20:42.059370  768708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:20:42.059523  768708 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:20:42.059533  768708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:20:42.059573  768708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:20:42.059662  768708 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:20:42.059670  768708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:20:42.059711  768708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:20:42.060587  768708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.newest-cni-006653 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-006653]
	I1101 10:20:42.253514  768708 provision.go:177] copyRemoteCerts
	I1101 10:20:42.253597  768708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:20:42.253645  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:42.277872  768708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:20:42.388979  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:20:42.410682  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:20:42.432232  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:20:42.454614  768708 provision.go:87] duration metric: took 419.6194ms to configureAuth
	I1101 10:20:42.454655  768708 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:20:42.454894  768708 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:42.455007  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:42.479737  768708 main.go:143] libmachine: Using SSH client type: native
	I1101 10:20:42.480063  768708 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1101 10:20:42.480087  768708 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:20:42.820728  768708 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:20:42.820761  768708 machine.go:97] duration metric: took 1.36005076s to provisionDockerMachine
	I1101 10:20:42.820774  768708 client.go:176] duration metric: took 7.721701374s to LocalClient.Create
	I1101 10:20:42.820792  768708 start.go:167] duration metric: took 7.721757082s to libmachine.API.Create "newest-cni-006653"
	I1101 10:20:42.820802  768708 start.go:293] postStartSetup for "newest-cni-006653" (driver="docker")
	I1101 10:20:42.820814  768708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:20:42.820899  768708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:20:42.820945  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:42.863252  768708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:20:42.978762  768708 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:20:42.983048  768708 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:20:42.983088  768708 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:20:42.983105  768708 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:20:42.983203  768708 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:20:42.983348  768708 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:20:42.983503  768708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:20:42.995394  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:20:43.026080  768708 start.go:296] duration metric: took 205.25916ms for postStartSetup
	I1101 10:20:43.026633  768708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:20:43.050625  768708 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json ...
	I1101 10:20:43.051017  768708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:20:43.051102  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:43.072344  768708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:20:43.172550  768708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:20:43.177947  768708 start.go:128] duration metric: took 8.080819012s to createHost
	I1101 10:20:43.177981  768708 start.go:83] releasing machines lock for "newest-cni-006653", held for 8.080996393s
	I1101 10:20:43.178062  768708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:20:43.196929  768708 ssh_runner.go:195] Run: cat /version.json
	I1101 10:20:43.197047  768708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:20:43.197087  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:43.197140  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:20:43.218144  768708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:20:43.219026  768708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:20:43.372828  768708 ssh_runner.go:195] Run: systemctl --version
	I1101 10:20:43.380140  768708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:20:43.418693  768708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:20:43.424270  768708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:20:43.424342  768708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:20:43.452147  768708 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:20:43.452170  768708 start.go:496] detecting cgroup driver to use...
	I1101 10:20:43.452204  768708 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:20:43.452254  768708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:20:43.469941  768708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:20:43.483959  768708 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:20:43.484032  768708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:20:43.502633  768708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:20:43.521760  768708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:20:43.610784  768708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:20:43.706128  768708 docker.go:234] disabling docker service ...
	I1101 10:20:43.706198  768708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:20:43.727011  768708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:20:43.742030  768708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:20:43.849771  768708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:20:43.954428  768708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:20:43.970736  768708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:20:43.990120  768708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:20:43.990196  768708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:20:44.003687  768708 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:20:44.003771  768708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:20:44.016048  768708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:20:44.028645  768708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:20:44.040473  768708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:20:44.051667  768708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:20:44.064134  768708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:20:44.081356  768708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
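The sed commands from 10:20:43.990 onward rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A minimal sketch of spot-checking the drop-in inside the node afterwards (the grep is illustrative; the exact file layout may differ):

    # Run inside the newest-cni-006653 container, after the sed edits above.
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected values (taken from the commands in this log):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",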
	I1101 10:20:44.094019  768708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:20:44.104048  768708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:20:44.113827  768708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:20:44.216888  768708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:20:44.349560  768708 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:20:44.349635  768708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:20:44.355261  768708 start.go:564] Will wait 60s for crictl version
	I1101 10:20:44.355336  768708 ssh_runner.go:195] Run: which crictl
	I1101 10:20:44.359704  768708 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:20:44.389892  768708 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:20:44.389985  768708 ssh_runner.go:195] Run: crio --version
	I1101 10:20:44.425795  768708 ssh_runner.go:195] Run: crio --version
	I1101 10:20:44.460684  768708 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:20:44.461908  768708 cli_runner.go:164] Run: docker network inspect newest-cni-006653 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:20:44.479959  768708 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:20:44.484790  768708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:20:44.499385  768708 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:20:44.641522  764436 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501386312s
	I1101 10:20:44.653643  764436 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:20:44.666778  764436 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:20:44.677706  764436 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:20:44.678063  764436 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-535119 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:20:44.687920  764436 kubeadm.go:319] [bootstrap-token] Using token: wdofp5.07719xrrns43cjn0
	I1101 10:20:44.500379  768708 kubeadm.go:884] updating cluster {Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:20:44.500538  768708 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:20:44.500631  768708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:20:44.536655  768708 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:20:44.536681  768708 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:20:44.536745  768708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:20:44.564410  768708 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:20:44.564437  768708 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:20:44.564446  768708 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:20:44.564568  768708 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-006653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
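The block above is the kubelet systemd drop-in and node config that minikube renders; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes). A hedged, illustrative way to confirm what actually landed on the node:

    # Inside the node: show the rendered kubelet drop-in and the unit's current state.
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl status kubelet --no-pager | head -n 5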
	I1101 10:20:44.564684  768708 ssh_runner.go:195] Run: crio config
	I1101 10:20:44.614478  768708 cni.go:84] Creating CNI manager for ""
	I1101 10:20:44.614512  768708 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:20:44.614539  768708 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:20:44.614577  768708 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-006653 NodeName:newest-cni-006653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:20:44.614776  768708 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-006653"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
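That ends the generated kubeadm/kubelet/kube-proxy YAML; the lines below copy it to /var/tmp/minikube/kubeadm.yaml.new on the node. As a purely illustrative sketch (assuming a kubeadm binary is present under /var/lib/minikube/binaries/v1.34.1 alongside the kubectl used throughout this log), the rendered file could be validated before init:

    # Validate the generated config against the kubeadm v1beta4 API (illustrative only).
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new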
	I1101 10:20:44.614873  768708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:20:44.623846  768708 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:20:44.623918  768708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:20:44.632189  768708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:20:44.646653  768708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:20:44.664678  768708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1101 10:20:44.681233  768708 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:20:44.686018  768708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:20:44.699487  768708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:20:44.784318  768708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:20:44.811662  768708 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653 for IP: 192.168.76.2
	I1101 10:20:44.811691  768708 certs.go:195] generating shared ca certs ...
	I1101 10:20:44.811715  768708 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:44.811921  768708 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:20:44.811967  768708 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:20:44.811977  768708 certs.go:257] generating profile certs ...
	I1101 10:20:44.812039  768708 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.key
	I1101 10:20:44.812053  768708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.crt with IP's: []
	I1101 10:20:44.689214  764436 out.go:252]   - Configuring RBAC rules ...
	I1101 10:20:44.689384  764436 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:20:44.694630  764436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:20:44.701339  764436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:20:44.704414  764436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:20:44.708267  764436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:20:44.711279  764436 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:20:45.049423  764436 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:20:45.466484  764436 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:20:46.047817  764436 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:20:46.049071  764436 kubeadm.go:319] 
	I1101 10:20:46.049163  764436 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:20:46.049177  764436 kubeadm.go:319] 
	I1101 10:20:46.049290  764436 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:20:46.049316  764436 kubeadm.go:319] 
	I1101 10:20:46.049373  764436 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:20:46.049468  764436 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:20:46.049547  764436 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:20:46.049557  764436 kubeadm.go:319] 
	I1101 10:20:46.049630  764436 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:20:46.049640  764436 kubeadm.go:319] 
	I1101 10:20:46.049717  764436 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:20:46.049725  764436 kubeadm.go:319] 
	I1101 10:20:46.049803  764436 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:20:46.049939  764436 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:20:46.050026  764436 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:20:46.050043  764436 kubeadm.go:319] 
	I1101 10:20:46.050149  764436 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:20:46.050245  764436 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:20:46.050251  764436 kubeadm.go:319] 
	I1101 10:20:46.050360  764436 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token wdofp5.07719xrrns43cjn0 \
	I1101 10:20:46.050633  764436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 10:20:46.050673  764436 kubeadm.go:319] 	--control-plane 
	I1101 10:20:46.050702  764436 kubeadm.go:319] 
	I1101 10:20:46.050826  764436 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:20:46.050866  764436 kubeadm.go:319] 
	I1101 10:20:46.051022  764436 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token wdofp5.07719xrrns43cjn0 \
	I1101 10:20:46.051183  764436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
	I1101 10:20:46.054276  764436 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:20:46.054385  764436 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:20:46.054407  764436 cni.go:84] Creating CNI manager for ""
	I1101 10:20:46.054417  764436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:20:46.056760  764436 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:20:46.057665  764436 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:20:46.062893  764436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:20:46.062914  764436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:20:46.078802  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:20:46.311940  764436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:20:46.312031  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:46.312038  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-535119 minikube.k8s.io/updated_at=2025_11_01T10_20_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=default-k8s-diff-port-535119 minikube.k8s.io/primary=true
	I1101 10:20:46.406032  764436 ops.go:34] apiserver oom_adj: -16
	I1101 10:20:46.406241  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:46.906469  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:47.407280  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:47.906610  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1101 10:20:44.974787  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:20:46.975575  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:20:45.965488  734517 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.0772598s)
	W1101 10:20:45.965531  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 10:20:45.965546  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:20:45.965565  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:46.004984  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:20:46.005028  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:20:48.545137  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:44.961801  768708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.crt ...
	I1101 10:20:44.961841  768708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.crt: {Name:mk157a16b31c6f8afa72d4411bec086cc817f19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:44.962040  768708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.key ...
	I1101 10:20:44.962052  768708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.key: {Name:mk7ba6e3cb69f96453056bfb421e94dbeb6aeab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:44.962133  768708 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key.c43daf58
	I1101 10:20:44.962148  768708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt.c43daf58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:20:45.254949  768708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt.c43daf58 ...
	I1101 10:20:45.254993  768708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt.c43daf58: {Name:mk029f3fa6b561e6122c12d67bf61df76e9d8a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:45.255232  768708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key.c43daf58 ...
	I1101 10:20:45.255261  768708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key.c43daf58: {Name:mkf06314f06208ab41cc935a83b5206ff8e40cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:45.255396  768708 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt.c43daf58 -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt
	I1101 10:20:45.255508  768708 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key.c43daf58 -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key
	I1101 10:20:45.255593  768708 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key
	I1101 10:20:45.255619  768708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.crt with IP's: []
	I1101 10:20:45.447731  768708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.crt ...
	I1101 10:20:45.447769  768708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.crt: {Name:mkba4d2cce72ac30409fafb713d5c8f7417960f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:45.448004  768708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key ...
	I1101 10:20:45.448029  768708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key: {Name:mkf229b16485a6f9dcd00179b08f788084f0054e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:45.448323  768708 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:20:45.448378  768708 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:20:45.448392  768708 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:20:45.448424  768708 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:20:45.448454  768708 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:20:45.448481  768708 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:20:45.448537  768708 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:20:45.449287  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:20:45.472192  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:20:45.493461  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:20:45.513222  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:20:45.532824  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:20:45.552997  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:20:45.572482  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:20:45.591572  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:20:45.611559  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:20:45.633133  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:20:45.653980  768708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:20:45.673357  768708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:20:45.687216  768708 ssh_runner.go:195] Run: openssl version
	I1101 10:20:45.693814  768708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:20:45.703808  768708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:20:45.708273  768708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:20:45.708360  768708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:20:45.748090  768708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:20:45.759317  768708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:20:45.768911  768708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:20:45.773764  768708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:20:45.773853  768708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:20:45.811079  768708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:20:45.820811  768708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:20:45.830676  768708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:20:45.835517  768708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:20:45.835587  768708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:20:45.872197  768708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
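
Note: the hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming convention for /etc/ssl/certs. A minimal Go sketch of the same two steps the runner performs on the node (compute the hash, then ln -fs); it assumes openssl is on PATH, and the paths are purely illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the two shell steps in the log: compute the
// OpenSSL subject hash of a PEM certificate, then symlink it into the
// certs directory as "<hash>.0" so TLS libraries can find it by hash.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative paths only; in the test run this happens over SSH on the node.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
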
	I1101 10:20:45.882495  768708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:20:45.886430  768708 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:20:45.886488  768708 kubeadm.go:401] StartCluster: {Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:20:45.886587  768708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:20:45.886643  768708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:20:45.917097  768708 cri.go:89] found id: ""
	I1101 10:20:45.917169  768708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:20:45.925947  768708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:20:45.934544  768708 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:20:45.934632  768708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:20:45.943074  768708 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:20:45.943099  768708 kubeadm.go:158] found existing configuration files:
	
	I1101 10:20:45.943152  768708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:20:45.952751  768708 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:20:45.952807  768708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:20:45.961358  768708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:20:45.970124  768708 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:20:45.970205  768708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:20:45.979389  768708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:20:45.988192  768708 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:20:45.988251  768708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:20:45.997987  768708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:20:46.008399  768708 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:20:46.008465  768708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
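
Note: the grep-then-rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the upcoming kubeadm init can rewrite it. A hedged Go sketch of that decision, with illustrative local paths standing in for the SSH commands:

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

// pruneStaleConfig removes conf files that do not reference the expected
// control-plane endpoint, mirroring the "grep ... || rm -f ..." steps in the log.
func pruneStaleConfig(dir, endpoint string, names []string) {
	for _, name := range names {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: delete so kubeadm can regenerate it.
			_ = os.Remove(path)
			fmt.Printf("removed stale %s\n", path)
		}
	}
}

func main() {
	pruneStaleConfig("/etc/kubernetes", "https://control-plane.minikube.internal:8443",
		[]string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"})
}
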
	I1101 10:20:46.017670  768708 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:20:46.089896  768708 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:20:46.162471  768708 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:20:48.406691  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:48.907271  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:49.406779  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:49.906492  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:50.407026  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:50.907053  764436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:51.004624  764436 kubeadm.go:1114] duration metric: took 4.692658496s to wait for elevateKubeSystemPrivileges
	I1101 10:20:51.004684  764436 kubeadm.go:403] duration metric: took 18.091906339s to StartCluster
	I1101 10:20:51.004716  764436 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:51.004802  764436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:20:51.007140  764436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:20:51.007422  764436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:20:51.007435  764436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:20:51.007481  764436 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:20:51.007585  764436 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-535119"
	I1101 10:20:51.007625  764436 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-535119"
	I1101 10:20:51.007683  764436 config.go:182] Loaded profile config "default-k8s-diff-port-535119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:20:51.007698  764436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-535119"
	I1101 10:20:51.007637  764436 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-535119"
	I1101 10:20:51.008138  764436 host.go:66] Checking if "default-k8s-diff-port-535119" exists ...
	I1101 10:20:51.008482  764436 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-535119 --format={{.State.Status}}
	I1101 10:20:51.008579  764436 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-535119 --format={{.State.Status}}
	I1101 10:20:51.009299  764436 out.go:179] * Verifying Kubernetes components...
	I1101 10:20:51.010651  764436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:20:51.040737  764436 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-535119"
	I1101 10:20:51.040788  764436 host.go:66] Checking if "default-k8s-diff-port-535119" exists ...
	I1101 10:20:51.041380  764436 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-535119 --format={{.State.Status}}
	I1101 10:20:51.046191  764436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:20:51.048485  764436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:20:51.048522  764436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:20:51.048594  764436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-535119
	I1101 10:20:51.068247  764436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:20:51.068278  764436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:20:51.069367  764436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-535119
	I1101 10:20:51.080713  764436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/default-k8s-diff-port-535119/id_rsa Username:docker}
	I1101 10:20:51.104274  764436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/default-k8s-diff-port-535119/id_rsa Username:docker}
	I1101 10:20:51.153739  764436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:20:51.217386  764436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:20:51.263178  764436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:20:51.266761  764436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:20:51.405025  764436 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:20:51.407652  764436 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-535119" to be "Ready" ...
	I1101 10:20:51.664958  764436 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:20:51.666145  764436 addons.go:515] duration metric: took 658.674207ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:20:51.911928  764436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-535119" context rescaled to 1 replicas
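
Note: the "host record injected into CoreDNS's ConfigMap" step above is a sed edit of the Corefile: a hosts block mapping host.minikube.internal to the gateway IP is inserted just before the forward directive, and the result is pushed back with kubectl replace. A minimal Go sketch of just the string transformation (the ConfigMap round-trip is omitted):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block ahead of the forward
// directive, the same edit the sed expression in the log performs.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
}
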
	W1101 10:20:49.474757  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:20:51.476877  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:20:50.333770  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:59504->192.168.103.2:8443: read: connection reset by peer
	I1101 10:20:50.333870  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:50.333936  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:50.366501  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:20:50.366531  734517 cri.go:89] found id: "d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	I1101 10:20:50.366537  734517 cri.go:89] found id: ""
	I1101 10:20:50.366548  734517 logs.go:282] 2 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]
	I1101 10:20:50.366612  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:50.370994  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:50.374814  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:50.374903  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:50.406220  734517 cri.go:89] found id: ""
	I1101 10:20:50.406249  734517 logs.go:282] 0 containers: []
	W1101 10:20:50.406261  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:50.406269  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:50.406327  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:50.438257  734517 cri.go:89] found id: ""
	I1101 10:20:50.438293  734517 logs.go:282] 0 containers: []
	W1101 10:20:50.438304  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:50.438312  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:50.438375  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:50.475408  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:50.475440  734517 cri.go:89] found id: ""
	I1101 10:20:50.475451  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:50.475517  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:50.480935  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:50.481030  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:50.515163  734517 cri.go:89] found id: ""
	I1101 10:20:50.515194  734517 logs.go:282] 0 containers: []
	W1101 10:20:50.515206  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:50.515223  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:50.515285  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:50.551103  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:20:50.551127  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:50.551133  734517 cri.go:89] found id: ""
	I1101 10:20:50.551144  734517 logs.go:282] 2 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:50.551268  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:50.557085  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:50.563244  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:50.563372  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:50.597265  734517 cri.go:89] found id: ""
	I1101 10:20:50.597298  734517 logs.go:282] 0 containers: []
	W1101 10:20:50.597307  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:50.597316  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:50.597381  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:50.627911  734517 cri.go:89] found id: ""
	I1101 10:20:50.627935  734517 logs.go:282] 0 containers: []
	W1101 10:20:50.627946  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:50.627963  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:20:50.627977  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:20:50.665436  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:50.665476  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:50.725727  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:20:50.725780  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:20:50.755168  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:20:50.755205  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:20:50.788937  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:20:50.788968  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:20:50.887526  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:20:50.887566  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:20:50.908964  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:20:50.909003  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:20:50.992100  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:20:50.992127  734517 logs.go:123] Gathering logs for kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5] ...
	I1101 10:20:50.992146  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	W1101 10:20:51.037009  734517 logs.go:130] failed kube-apiserver [d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5": Process exited with status 1
	stdout:
	
	stderr:
	E1101 10:20:51.032467    5397 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5\": container with ID starting with d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5 not found: ID does not exist" containerID="d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	time="2025-11-01T10:20:51Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5\": container with ID starting with d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1101 10:20:51.032467    5397 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5\": container with ID starting with d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5 not found: ID does not exist" containerID="d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5"
	time="2025-11-01T10:20:51Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5\": container with ID starting with d5468004cf02084e9377cb37cdca52047329905e07ddebaad250d9eb4d6523a5 not found: ID does not exist"
	
	** /stderr **
	I1101 10:20:51.037041  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:20:51.037070  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:51.092787  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:20:51.092852  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:20:53.695296  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:53.695749  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:20:53.695810  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:53.695910  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:53.724710  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:20:53.724734  734517 cri.go:89] found id: ""
	I1101 10:20:53.724744  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:20:53.724806  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:53.729001  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:53.729075  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:53.756879  734517 cri.go:89] found id: ""
	I1101 10:20:53.756909  734517 logs.go:282] 0 containers: []
	W1101 10:20:53.756918  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:53.756924  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:53.756988  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:53.785736  734517 cri.go:89] found id: ""
	I1101 10:20:53.785768  734517 logs.go:282] 0 containers: []
	W1101 10:20:53.785779  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:53.785787  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:53.785871  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:53.814071  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:53.814093  734517 cri.go:89] found id: ""
	I1101 10:20:53.814105  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:53.814167  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:53.818323  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:53.818440  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:53.846797  734517 cri.go:89] found id: ""
	I1101 10:20:53.846825  734517 logs.go:282] 0 containers: []
	W1101 10:20:53.846848  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:53.846857  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:53.846930  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:53.875588  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:20:53.875611  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:53.875617  734517 cri.go:89] found id: ""
	I1101 10:20:53.875628  734517 logs.go:282] 2 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:53.875690  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:53.880088  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:53.883892  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:53.883948  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:53.912327  734517 cri.go:89] found id: ""
	I1101 10:20:53.912352  734517 logs.go:282] 0 containers: []
	W1101 10:20:53.912360  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:53.912368  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:53.912423  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:53.940854  734517 cri.go:89] found id: ""
	I1101 10:20:53.940885  734517 logs.go:282] 0 containers: []
	W1101 10:20:53.940897  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:53.940913  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:20:53.940928  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:20:54.002365  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:20:54.002391  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:20:54.002409  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:20:54.037934  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:54.037969  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:54.090316  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:20:54.090362  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:20:54.119757  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:20:54.119791  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:20:54.180718  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:20:54.180767  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:20:54.214035  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:20:54.214078  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:20:54.234621  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:20:54.234670  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:54.269261  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:20:54.269296  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:20:56.371670  768708 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:20:56.371730  768708 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:20:56.371809  768708 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:20:56.371918  768708 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:20:56.371961  768708 kubeadm.go:319] OS: Linux
	I1101 10:20:56.372001  768708 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:20:56.372069  768708 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:20:56.372131  768708 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:20:56.372183  768708 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:20:56.372226  768708 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:20:56.372273  768708 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:20:56.372320  768708 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:20:56.372360  768708 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:20:56.372439  768708 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:20:56.372524  768708 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:20:56.372603  768708 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:20:56.372665  768708 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:20:56.373804  768708 out.go:252]   - Generating certificates and keys ...
	I1101 10:20:56.373907  768708 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:20:56.373970  768708 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:20:56.374030  768708 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:20:56.374081  768708 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:20:56.374140  768708 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:20:56.374192  768708 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:20:56.374239  768708 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:20:56.374349  768708 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-006653] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:20:56.374398  768708 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:20:56.374547  768708 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-006653] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:20:56.374648  768708 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:20:56.374762  768708 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:20:56.374811  768708 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:20:56.374894  768708 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:20:56.374957  768708 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:20:56.375011  768708 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:20:56.375057  768708 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:20:56.375116  768708 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:20:56.375185  768708 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:20:56.375261  768708 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:20:56.375318  768708 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:20:56.376452  768708 out.go:252]   - Booting up control plane ...
	I1101 10:20:56.376543  768708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:20:56.376606  768708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:20:56.376665  768708 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:20:56.376747  768708 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:20:56.376821  768708 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:20:56.376980  768708 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:20:56.377098  768708 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:20:56.377165  768708 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:20:56.377322  768708 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:20:56.377457  768708 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:20:56.377518  768708 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501994265s
	I1101 10:20:56.377597  768708 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:20:56.377678  768708 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 10:20:56.377761  768708 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:20:56.377833  768708 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:20:56.377959  768708 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.181616771s
	I1101 10:20:56.378064  768708 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.326559971s
	I1101 10:20:56.378156  768708 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001913442s
	I1101 10:20:56.378320  768708 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:20:56.378494  768708 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:20:56.378581  768708 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:20:56.378798  768708 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-006653 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:20:56.378874  768708 kubeadm.go:319] [bootstrap-token] Using token: cfcsg6.t4bipbr3v9caocw5
	I1101 10:20:56.379949  768708 out.go:252]   - Configuring RBAC rules ...
	I1101 10:20:56.380095  768708 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:20:56.380219  768708 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:20:56.380439  768708 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:20:56.380614  768708 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:20:56.380759  768708 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:20:56.380887  768708 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:20:56.381049  768708 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:20:56.381115  768708 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:20:56.381179  768708 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:20:56.381193  768708 kubeadm.go:319] 
	I1101 10:20:56.381277  768708 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:20:56.381287  768708 kubeadm.go:319] 
	I1101 10:20:56.381419  768708 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:20:56.381435  768708 kubeadm.go:319] 
	I1101 10:20:56.381464  768708 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:20:56.381544  768708 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:20:56.381607  768708 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:20:56.381620  768708 kubeadm.go:319] 
	I1101 10:20:56.381693  768708 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:20:56.381707  768708 kubeadm.go:319] 
	I1101 10:20:56.381761  768708 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:20:56.381770  768708 kubeadm.go:319] 
	I1101 10:20:56.381820  768708 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:20:56.381930  768708 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:20:56.382028  768708 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:20:56.382038  768708 kubeadm.go:319] 
	I1101 10:20:56.382144  768708 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:20:56.382251  768708 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:20:56.382260  768708 kubeadm.go:319] 
	I1101 10:20:56.382366  768708 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cfcsg6.t4bipbr3v9caocw5 \
	I1101 10:20:56.382520  768708 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 \
	I1101 10:20:56.382545  768708 kubeadm.go:319] 	--control-plane 
	I1101 10:20:56.382550  768708 kubeadm.go:319] 
	I1101 10:20:56.382622  768708 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:20:56.382629  768708 kubeadm.go:319] 
	I1101 10:20:56.382706  768708 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cfcsg6.t4bipbr3v9caocw5 \
	I1101 10:20:56.382822  768708 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9f0c006fdb1f0d4f57181834b563af818ccfb0533a9061e5422da6257b40a909 
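
Note: the [kubelet-check] and [control-plane-check] lines in the kubeadm output above, like the earlier /healthz retries, just poll well-known endpoints until each component answers 200. A rough Go sketch of that polling loop, assuming self-signed serving certs (hence InsecureSkipVerify) and using the endpoint URLs reported in the log as illustrative values:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls an endpoint until it returns 200 OK or the deadline
// passes, roughly what the control-plane-check and healthz retries do.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	endpoints := []string{
		"https://192.168.76.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	for _, ep := range endpoints {
		fmt.Println(ep, waitHealthy(ep, 4*time.Minute))
	}
}
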
	I1101 10:20:56.382857  768708 cni.go:84] Creating CNI manager for ""
	I1101 10:20:56.382872  768708 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:20:56.384014  768708 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 10:20:53.411898  764436 node_ready.go:57] node "default-k8s-diff-port-535119" has "Ready":"False" status (will retry)
	W1101 10:20:55.413383  764436 node_ready.go:57] node "default-k8s-diff-port-535119" has "Ready":"False" status (will retry)
	W1101 10:20:57.910537  764436 node_ready.go:57] node "default-k8s-diff-port-535119" has "Ready":"False" status (will retry)
	W1101 10:20:53.975002  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:20:55.975190  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:20:56.886109  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:20:56.886568  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:20:56.886631  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:20:56.886684  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:20:56.915412  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:20:56.915446  734517 cri.go:89] found id: ""
	I1101 10:20:56.915456  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:20:56.915522  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:56.920439  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:20:56.920517  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:20:56.951484  734517 cri.go:89] found id: ""
	I1101 10:20:56.951526  734517 logs.go:282] 0 containers: []
	W1101 10:20:56.951536  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:20:56.951542  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:20:56.951608  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:20:56.981205  734517 cri.go:89] found id: ""
	I1101 10:20:56.981234  734517 logs.go:282] 0 containers: []
	W1101 10:20:56.981244  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:20:56.981253  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:20:56.981311  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:20:57.009988  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:57.010013  734517 cri.go:89] found id: ""
	I1101 10:20:57.010024  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:20:57.010094  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:57.014641  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:20:57.014713  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:20:57.044898  734517 cri.go:89] found id: ""
	I1101 10:20:57.044925  734517 logs.go:282] 0 containers: []
	W1101 10:20:57.044933  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:20:57.044939  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:20:57.044990  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:20:57.075122  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:20:57.075150  734517 cri.go:89] found id: "37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:57.075156  734517 cri.go:89] found id: ""
	I1101 10:20:57.075165  734517 logs.go:282] 2 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718]
	I1101 10:20:57.075230  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:57.079828  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:20:57.083920  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:20:57.083993  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:20:57.113173  734517 cri.go:89] found id: ""
	I1101 10:20:57.113209  734517 logs.go:282] 0 containers: []
	W1101 10:20:57.113221  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:20:57.113229  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:20:57.113295  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:20:57.143747  734517 cri.go:89] found id: ""
	I1101 10:20:57.143779  734517 logs.go:282] 0 containers: []
	W1101 10:20:57.143791  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:20:57.143811  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:20:57.143832  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:20:57.238174  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:20:57.238213  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:20:57.306737  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:20:57.306766  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:20:57.306785  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:20:57.336538  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:20:57.336571  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:20:57.371801  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:20:57.371871  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:20:57.391980  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:20:57.392014  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:20:57.427503  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:20:57.427536  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:20:57.486915  734517 logs.go:123] Gathering logs for kube-controller-manager [37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718] ...
	I1101 10:20:57.486971  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37c327fe41b6a871f4913fa3b05a0c983b5fc7b866e972e8a638048066027718"
	I1101 10:20:57.516630  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:20:57.516664  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:20:56.384880  768708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:20:56.389384  768708 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:20:56.389399  768708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:20:56.403378  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
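
Note: the CNI step above writes the generated kindnet manifest to the node ("scp memory --> /var/tmp/minikube/cni.yaml") and applies it with the pinned kubectl binary. A small sketch of the same write-then-apply pattern; the kubectl and kubeconfig paths are copied from the log but should be treated as illustrative, and the manifest content here is a placeholder:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest writes manifest bytes to disk and applies them with kubectl,
// matching the "scp memory --> cni.yaml" plus "kubectl apply -f" pair in the log.
func applyManifest(kubectl, kubeconfig, path string, manifest []byte) error {
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	out, err := exec.Command(kubectl, "apply", "--kubeconfig", kubeconfig, "-f", path).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	manifest := []byte("# kindnet DaemonSet manifest would go here\n")
	_ = applyManifest("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml", manifest)
}
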
	I1101 10:20:56.627219  768708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:20:56.627379  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:56.627483  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-006653 minikube.k8s.io/updated_at=2025_11_01T10_20_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=newest-cni-006653 minikube.k8s.io/primary=true
	I1101 10:20:56.641443  768708 ops.go:34] apiserver oom_adj: -16
	I1101 10:20:56.718267  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:57.219085  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:57.718695  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:58.218998  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:58.718933  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:59.218896  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:20:59.719087  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:21:00.219108  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:21:00.719106  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:21:01.219223  768708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:21:01.295778  768708 kubeadm.go:1114] duration metric: took 4.668449869s to wait for elevateKubeSystemPrivileges
	I1101 10:21:01.295880  768708 kubeadm.go:403] duration metric: took 15.409395518s to StartCluster
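
Note: the repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: after kubeadm finishes, the runner retries roughly every 500ms until the default ServiceAccount exists (about 4.7s in this run). A hedged Go sketch of that retry loop, shelling out to kubectl:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultServiceAccount retries `kubectl get sa default` until it
// succeeds or the timeout expires, mirroring the loop in the log.
func waitDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for time.Since(start) < timeout {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return time.Since(start), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return timeout, fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	d, err := waitDefaultServiceAccount("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("waited", d, "err:", err)
}
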
	I1101 10:21:01.295909  768708 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:01.296010  768708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:01.298137  768708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:01.298504  768708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:21:01.298519  768708 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:21:01.298647  768708 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:21:01.298747  768708 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-006653"
	I1101 10:21:01.298774  768708 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:01.298794  768708 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-006653"
	I1101 10:21:01.298782  768708 addons.go:70] Setting default-storageclass=true in profile "newest-cni-006653"
	I1101 10:21:01.298832  768708 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:01.298868  768708 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-006653"
	I1101 10:21:01.299384  768708 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:01.299437  768708 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:01.300039  768708 out.go:179] * Verifying Kubernetes components...
	I1101 10:21:01.302690  768708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:01.323021  768708 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:21:01.323861  768708 addons.go:239] Setting addon default-storageclass=true in "newest-cni-006653"
	I1101 10:21:01.323915  768708 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:01.324154  768708 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:21:01.324174  768708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:21:01.324249  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:01.324500  768708 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:01.354556  768708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:01.354647  768708 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:21:01.354791  768708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:21:01.354899  768708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:01.392799  768708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:01.413812  768708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:21:01.495363  768708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:21:01.508905  768708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:21:01.528784  768708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:21:01.652744  768708 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 10:21:01.654653  768708 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:21:01.654739  768708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:21:01.865752  768708 api_server.go:72] duration metric: took 567.187599ms to wait for apiserver process to appear ...
	I1101 10:21:01.865789  768708 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:21:01.865813  768708 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:01.871355  768708 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:21:01.872508  768708 api_server.go:141] control plane version: v1.34.1
	I1101 10:21:01.872540  768708 api_server.go:131] duration metric: took 6.741759ms to wait for apiserver health ...
	I1101 10:21:01.872563  768708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:21:01.873560  768708 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:21:01.874525  768708 addons.go:515] duration metric: took 575.874135ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:21:01.876084  768708 system_pods.go:59] 8 kube-system pods found
	I1101 10:21:01.876114  768708 system_pods.go:61] "coredns-66bc5c9577-gn6zx" [a7bda15a-3bb6-4481-b103-cc8eed070995] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:21:01.876121  768708 system_pods.go:61] "etcd-newest-cni-006653" [e2c0df01-64cf-4a18-821f-527dddcf3772] Running
	I1101 10:21:01.876127  768708 system_pods.go:61] "kindnet-487js" [0400e397-aa86-4a6e-976e-ff1a3844727b] Running
	I1101 10:21:01.876133  768708 system_pods.go:61] "kube-apiserver-newest-cni-006653" [2bd8a1b8-97ce-4f57-90a9-e523107f3bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:21:01.876144  768708 system_pods.go:61] "kube-controller-manager-newest-cni-006653" [b95204ce-cd11-470d-add1-5c7ca7f0494d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:21:01.876148  768708 system_pods.go:61] "kube-proxy-kp445" [ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b] Running
	I1101 10:21:01.876153  768708 system_pods.go:61] "kube-scheduler-newest-cni-006653" [431cf3e8-7ee3-4c54-8e86-21f4a7901987] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:21:01.876160  768708 system_pods.go:61] "storage-provisioner" [78945df3-ecd6-4d3d-aadb-3b0eb7fb8967] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:21:01.876168  768708 system_pods.go:74] duration metric: took 3.597419ms to wait for pod list to return data ...
	I1101 10:21:01.876179  768708 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:21:01.878824  768708 default_sa.go:45] found service account: "default"
	I1101 10:21:01.878877  768708 default_sa.go:55] duration metric: took 2.68793ms for default service account to be created ...
	I1101 10:21:01.878890  768708 kubeadm.go:587] duration metric: took 580.339191ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:21:01.878909  768708 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:21:01.881490  768708 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:21:01.881521  768708 node_conditions.go:123] node cpu capacity is 8
	I1101 10:21:01.881545  768708 node_conditions.go:105] duration metric: took 2.631678ms to run NodePressure ...
	I1101 10:21:01.881558  768708 start.go:242] waiting for startup goroutines ...
	I1101 10:21:02.159070  768708 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-006653" context rescaled to 1 replicas
	I1101 10:21:02.159114  768708 start.go:247] waiting for cluster config update ...
	I1101 10:21:02.159128  768708 start.go:256] writing updated cluster config ...
	I1101 10:21:02.159464  768708 ssh_runner.go:195] Run: rm -f paused
	I1101 10:21:02.214508  768708 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:21:02.216429  768708 out.go:179] * Done! kubectl is now configured to use "newest-cni-006653" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.354011453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.3626655Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=be353f0b-05c2-4c27-ab73-4ab35af286b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.368061654Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.370537893Z" level=info msg="Ran pod sandbox 6d596b3ac9e5910afdf6f51afdee62cc24821d4df60fe039948f12b2695e5761 with infra container: kube-system/kindnet-487js/POD" id=be353f0b-05c2-4c27-ab73-4ab35af286b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.37255885Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c1df9178-204f-4575-94c8-dae5d4b925d3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.376144009Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=88d14563-7014-4fb9-9077-3ea95f562b0c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.377257856Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.37902492Z" level=info msg="Ran pod sandbox 16475e56bee93913b51d20d9b2717078d42fbb4d22c6492dff6e215c61ba2948 with infra container: kube-system/kube-proxy-kp445/POD" id=c1df9178-204f-4575-94c8-dae5d4b925d3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.379346179Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a934de3d-a73b-497f-9435-547b694e3bd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.380349669Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fd7cb503-3281-4f90-b8e4-21bd40879e63 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.381952496Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5baf4a5e-791e-4782-b393-80963b7d4acb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.383593927Z" level=info msg="Creating container: kube-system/kindnet-487js/kindnet-cni" id=a9a7a72a-eb9f-4894-88e4-48a548792eea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.38373264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.385411049Z" level=info msg="Creating container: kube-system/kube-proxy-kp445/kube-proxy" id=43949cc4-00a4-4ffb-8bc0-dcfdd2e506ee name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.385567526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.395055792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.395828014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.401199205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.401869126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.538509368Z" level=info msg="Created container 262f5bc8fd0b8ce68123aa33c36b66b5e4968eb7eb485236e95fc6bf70f14a31: kube-system/kube-proxy-kp445/kube-proxy" id=43949cc4-00a4-4ffb-8bc0-dcfdd2e506ee name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.539503528Z" level=info msg="Created container 301921a200c754f8f02b9e4576e2a55ef5dfc82e73742ef7d9b1f8e896c7f5d9: kube-system/kindnet-487js/kindnet-cni" id=a9a7a72a-eb9f-4894-88e4-48a548792eea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.54002573Z" level=info msg="Starting container: 262f5bc8fd0b8ce68123aa33c36b66b5e4968eb7eb485236e95fc6bf70f14a31" id=92b4e679-13d8-4c6a-91ba-ed8cc130a645 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.54014663Z" level=info msg="Starting container: 301921a200c754f8f02b9e4576e2a55ef5dfc82e73742ef7d9b1f8e896c7f5d9" id=7328f211-6f01-4459-b35c-6fb6ede136b8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.543231271Z" level=info msg="Started container" PID=1562 containerID=301921a200c754f8f02b9e4576e2a55ef5dfc82e73742ef7d9b1f8e896c7f5d9 description=kube-system/kindnet-487js/kindnet-cni id=7328f211-6f01-4459-b35c-6fb6ede136b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d596b3ac9e5910afdf6f51afdee62cc24821d4df60fe039948f12b2695e5761
	Nov 01 10:21:01 newest-cni-006653 crio[766]: time="2025-11-01T10:21:01.543996601Z" level=info msg="Started container" PID=1564 containerID=262f5bc8fd0b8ce68123aa33c36b66b5e4968eb7eb485236e95fc6bf70f14a31 description=kube-system/kube-proxy-kp445/kube-proxy id=92b4e679-13d8-4c6a-91ba-ed8cc130a645 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16475e56bee93913b51d20d9b2717078d42fbb4d22c6492dff6e215c61ba2948
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	262f5bc8fd0b8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   16475e56bee93       kube-proxy-kp445                            kube-system
	301921a200c75       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   6d596b3ac9e59       kindnet-487js                               kube-system
	ea6dc4925ca44       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   aa99e771339f2       kube-apiserver-newest-cni-006653            kube-system
	41fa8cd1d4cf6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   9337963b10b10       kube-scheduler-newest-cni-006653            kube-system
	c6eea04061bc9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   059b1c6eda82d       kube-controller-manager-newest-cni-006653   kube-system
	82695190132fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   b1fd21d84aa52       etcd-newest-cni-006653                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-006653
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-006653
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=newest-cni-006653
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-006653
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:20:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:20:55 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:20:55 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:20:55 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:20:55 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-006653
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e2a07147-2430-4ed4-a07b-b804bc96d00e
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-006653                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-487js                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-006653             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-006653    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-kp445                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-006653             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-006653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-006653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-006653 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-006653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-006653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-006653 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-006653 event: Registered Node newest-cni-006653 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [82695190132fb708889f0548fd0fcdff6b7f7c5c31b3494748dc6d0d6ec9ff2a] <==
	{"level":"warn","ts":"2025-11-01T10:20:52.593734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.600312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.606774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.627062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.633569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.639641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.646780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.653482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.659878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.667595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.676054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.682043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.688289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.695722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.703276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.710043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.717498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.724074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.737700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.745616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.753154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.778187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.785489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.791929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:52.849601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51132","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:03 up  3:03,  0 user,  load average: 4.93, 3.80, 2.91
	Linux newest-cni-006653 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [301921a200c754f8f02b9e4576e2a55ef5dfc82e73742ef7d9b1f8e896c7f5d9] <==
	I1101 10:21:01.826930       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:21:01.827251       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:21:01.827379       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:21:01.827394       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:21:01.827417       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:21:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:21:02.037320       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:21:02.037404       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:21:02.037425       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:21:02.037597       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:21:02.737714       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:21:02.737747       1 metrics.go:72] Registering metrics
	I1101 10:21:02.737894       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [ea6dc4925ca446fde1a66e44fdb7d97a69eeae2da3937399fd4b4a48f9d202a8] <==
	I1101 10:20:53.332962       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1101 10:20:53.333748       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:20:53.338072       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:20:53.341153       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:53.342922       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:53.343331       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:20:53.349038       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:20:53.530392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:20:54.237096       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:20:54.241197       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:20:54.241217       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:20:54.701635       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:20:54.736782       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:20:54.840428       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:20:54.846264       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:20:54.847281       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:20:54.851358       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:20:55.271918       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:20:55.772421       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:20:55.781652       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:20:55.789573       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:21:01.025038       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:21:01.125900       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:21:01.129825       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:21:01.387993       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c6eea04061bc9ca86c5f378ca7647da86af8ad777498b8080746a14b6070c13c] <==
	I1101 10:21:00.271799       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:21:00.271811       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:21:00.271856       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:21:00.272149       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:21:00.272158       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:21:00.272172       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:21:00.273113       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:21:00.277053       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:21:00.277117       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:21:00.277182       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:21:00.277200       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:21:00.277208       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:21:00.277496       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:21:00.277680       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:00.277701       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:21:00.277709       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:21:00.280446       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:21:00.284914       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:21:00.286092       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-006653" podCIDRs=["10.42.0.0/24"]
	I1101 10:21:00.293193       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:00.294226       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:21:00.294389       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:21:00.294543       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-006653"
	I1101 10:21:00.294615       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:21:00.300251       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [262f5bc8fd0b8ce68123aa33c36b66b5e4968eb7eb485236e95fc6bf70f14a31] <==
	I1101 10:21:01.605488       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:21:01.689645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:21:01.789944       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:21:01.789998       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:21:01.790132       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:21:01.812488       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:21:01.812561       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:21:01.819486       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:21:01.819863       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:21:01.819884       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:01.821368       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:21:01.821393       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:21:01.821414       1 config.go:200] "Starting service config controller"
	I1101 10:21:01.821428       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:21:01.821442       1 config.go:309] "Starting node config controller"
	I1101 10:21:01.821449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:21:01.821509       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:21:01.821520       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:21:01.922027       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:21:01.922087       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:21:01.922104       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:21:01.922093       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [41fa8cd1d4cf62f41871f4a72660d9096246ae617c6bf23e70ae08fa324724c3] <==
	E1101 10:20:53.281570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:20:53.281588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:20:53.281601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:20:53.281651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:20:53.281674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:20:53.281698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:20:53.281707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:20:53.281744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:20:53.281759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:20:53.281796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:20:53.281824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:20:53.281871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:20:54.159133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:20:54.163515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:20:54.222339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:20:54.251573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:20:54.290475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:20:54.303745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:20:54.337994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:20:54.399489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:20:54.426615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:20:54.449718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:20:54.538318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:20:54.551604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:20:57.179210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.602526    1293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.641158    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-006653"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.641365    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.641560    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-006653"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.641496    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-006653"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: E1101 10:20:56.650983    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-006653\" already exists" pod="kube-system/etcd-newest-cni-006653"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: E1101 10:20:56.651613    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-006653\" already exists" pod="kube-system/kube-scheduler-newest-cni-006653"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: E1101 10:20:56.651763    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-006653\" already exists" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: E1101 10:20:56.651697    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-006653\" already exists" pod="kube-system/kube-apiserver-newest-cni-006653"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.680668    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-006653" podStartSLOduration=1.680644512 podStartE2EDuration="1.680644512s" podCreationTimestamp="2025-11-01 10:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:56.669948021 +0000 UTC m=+1.136908905" watchObservedRunningTime="2025-11-01 10:20:56.680644512 +0000 UTC m=+1.147605378"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.691067    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-006653" podStartSLOduration=1.6910454590000001 podStartE2EDuration="1.691045459s" podCreationTimestamp="2025-11-01 10:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:56.68085171 +0000 UTC m=+1.147812586" watchObservedRunningTime="2025-11-01 10:20:56.691045459 +0000 UTC m=+1.158006344"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.701630    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-006653" podStartSLOduration=1.701604812 podStartE2EDuration="1.701604812s" podCreationTimestamp="2025-11-01 10:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:56.70159471 +0000 UTC m=+1.168555592" watchObservedRunningTime="2025-11-01 10:20:56.701604812 +0000 UTC m=+1.168565694"
	Nov 01 10:20:56 newest-cni-006653 kubelet[1293]: I1101 10:20:56.701922    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-006653" podStartSLOduration=1.7019000850000001 podStartE2EDuration="1.701900085s" podCreationTimestamp="2025-11-01 10:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:56.690961116 +0000 UTC m=+1.157921999" watchObservedRunningTime="2025-11-01 10:20:56.701900085 +0000 UTC m=+1.168860969"
	Nov 01 10:21:00 newest-cni-006653 kubelet[1293]: I1101 10:21:00.348233    1293 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:21:00 newest-cni-006653 kubelet[1293]: I1101 10:21:00.349134    1293 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.143255    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b-kube-proxy\") pod \"kube-proxy-kp445\" (UID: \"ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b\") " pod="kube-system/kube-proxy-kp445"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.143305    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-xtables-lock\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.143323    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9t5\" (UniqueName: \"kubernetes.io/projected/ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b-kube-api-access-gh9t5\") pod \"kube-proxy-kp445\" (UID: \"ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b\") " pod="kube-system/kube-proxy-kp445"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.143356    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-cni-cfg\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.143380    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b-xtables-lock\") pod \"kube-proxy-kp445\" (UID: \"ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b\") " pod="kube-system/kube-proxy-kp445"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.143401    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfqc2\" (UniqueName: \"kubernetes.io/projected/0400e397-aa86-4a6e-976e-ff1a3844727b-kube-api-access-nfqc2\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.143427    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b-lib-modules\") pod \"kube-proxy-kp445\" (UID: \"ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b\") " pod="kube-system/kube-proxy-kp445"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.143453    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-lib-modules\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.686328    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kp445" podStartSLOduration=0.686302274 podStartE2EDuration="686.302274ms" podCreationTimestamp="2025-11-01 10:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:21:01.674581112 +0000 UTC m=+6.141541996" watchObservedRunningTime="2025-11-01 10:21:01.686302274 +0000 UTC m=+6.153263159"
	Nov 01 10:21:01 newest-cni-006653 kubelet[1293]: I1101 10:21:01.702697    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-487js" podStartSLOduration=0.702675654 podStartE2EDuration="702.675654ms" podCreationTimestamp="2025-11-01 10:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:21:01.702398327 +0000 UTC m=+6.169359212" watchObservedRunningTime="2025-11-01 10:21:01.702675654 +0000 UTC m=+6.169636537"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006653 -n newest-cni-006653
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-006653 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gn6zx storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner: exit status 1 (68.885422ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gn6zx" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.34s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-535119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-535119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (349.802359ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-535119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
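For reference, the "check paused" probe behind the MK_ADDON_ENABLE_PAUSED error above shells out to the container runtime with the exact command shown in the stderr block (`sudo runc list -f json`); the "open /run/runc: no such file or directory" message indicates runc's state directory was missing on the freshly restarted node. The following is a minimal, hypothetical Go sketch of such a probe, not minikube's actual implementation; it assumes runc is on PATH inside the node and uses its default state root:

	// paused_check.go: minimal sketch of a "list paused containers" probe.
	// It runs the same command the log shows ("runc list -f json") and reports
	// whether any container is paused. Not minikube's real code.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// Failure mode seen in this test run: runc exits non-zero when its
			// state directory (/run/runc above) does not exist yet.
			fmt.Println("runc list failed:", err)
			return
		}
		// runc prints a JSON array of container state objects; decode loosely
		// so the sketch only relies on the fields it needs.
		var containers []map[string]interface{}
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("unexpected runc output:", err)
			return
		}
		for _, c := range containers {
			if c["status"] == "paused" {
				fmt.Println("paused container:", c["id"])
			}
		}
	}

On a node in the state captured above, this sketch would print the same runc error instead of a container list.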
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-535119 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-535119 describe deploy/metrics-server -n kube-system: exit status 1 (87.660937ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-535119 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
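The assertion above checks that the metrics-server Deployment picked up the image and registry overrides passed on the command line (`--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain`), i.e. that its container image contains `fake.domain/registry.k8s.io/echoserver:1.4`; in this run the Deployment was never created because enabling the addon already failed. A minimal, hypothetical Go sketch of the same verification (assuming kubectl on PATH and the cluster context from this log) might look like:

	// image_check.go: sketch of verifying the metrics-server image override.
	// Not the test's exact command (it uses `kubectl describe deploy`); this
	// reads the container images directly via a jsonpath query.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-535119",
			"-n", "kube-system", "get", "deploy", "metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
		if err != nil {
			// Taken in the failing run above: the Deployment does not exist
			// because enabling the addon already failed.
			fmt.Println("could not read metrics-server deployment:", err)
			return
		}
		if strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
			fmt.Println("custom image/registry override applied")
		} else {
			fmt.Println("unexpected image(s):", string(out))
		}
	}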
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-535119
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-535119:

-- stdout --
	[
	    {
	        "Id": "709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9",
	        "Created": "2025-11-01T10:20:27.432288023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 765935,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:20:27.47519397Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/hosts",
	        "LogPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9-json.log",
	        "Name": "/default-k8s-diff-port-535119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-535119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-535119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9",
	                "LowerDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-535119",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-535119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-535119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-535119",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-535119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ae33477d70b85448f5f424dd2424a9c08832ea67070929bb696cdcc9ce0379d",
	            "SandboxKey": "/var/run/docker/netns/8ae33477d70b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-535119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:89:06:09:eb:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "adb717c923a7eb081a40be81c8474558a336c362715ad5409671064c3146fad7",
	                    "EndpointID": "b2dd128201268c63bf15d349cce523b39995f860475d02c36aee324ae71fa569",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-535119",
	                        "709c1dd68365"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-535119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-535119 logs -n 25: (1.417565247s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p no-preload-680879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ stop    │ -p no-preload-680879 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-556573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ old-k8s-version-556573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ no-preload-680879 image list --format=json                                                                                                                                                                                                    │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p no-preload-680879 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p disable-driver-mounts-083568                                                                                                                                                                                                               │ disable-driver-mounts-083568 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p cert-expiration-577441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p cert-expiration-577441                                                                                                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-006653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ stop    │ -p newest-cni-006653 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-006653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-535119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:21:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:21:07.368818  775345 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:21:07.368991  775345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:07.369005  775345 out.go:374] Setting ErrFile to fd 2...
	I1101 10:21:07.369011  775345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:07.369282  775345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:21:07.369804  775345 out.go:368] Setting JSON to false
	I1101 10:21:07.372138  775345 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11004,"bootTime":1761981463,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:21:07.372273  775345 start.go:143] virtualization: kvm guest
	I1101 10:21:07.374034  775345 out.go:179] * [newest-cni-006653] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:21:07.375251  775345 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:21:07.375276  775345 notify.go:221] Checking for updates...
	I1101 10:21:07.377188  775345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:21:07.378236  775345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:07.379230  775345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:21:07.380231  775345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:21:07.381285  775345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:21:07.382730  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:07.383283  775345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:21:07.412824  775345 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:21:07.413000  775345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:07.477031  775345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:21:07.464068959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:07.477164  775345 docker.go:319] overlay module found
	I1101 10:21:07.479294  775345 out.go:179] * Using the docker driver based on existing profile
	I1101 10:21:07.480228  775345 start.go:309] selected driver: docker
	I1101 10:21:07.480246  775345 start.go:930] validating driver "docker" against &{Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:07.480361  775345 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:21:07.481141  775345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:07.547108  775345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:21:07.535480294 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:07.547439  775345 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:21:07.547475  775345 cni.go:84] Creating CNI manager for ""
	I1101 10:21:07.547541  775345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:21:07.547651  775345 start.go:353] cluster config:
	{Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:07.549748  775345 out.go:179] * Starting "newest-cni-006653" primary control-plane node in "newest-cni-006653" cluster
	I1101 10:21:07.550569  775345 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:21:07.551613  775345 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:21:07.552531  775345 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:21:07.552589  775345 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:21:07.552604  775345 cache.go:59] Caching tarball of preloaded images
	I1101 10:21:07.552645  775345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:21:07.552722  775345 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:21:07.552741  775345 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:21:07.552950  775345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json ...
	I1101 10:21:07.577411  775345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:21:07.577438  775345 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:21:07.577479  775345 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:21:07.577518  775345 start.go:360] acquireMachinesLock for newest-cni-006653: {Name:mkf496d0b80c7855406646357bd774886a0856a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:21:07.577606  775345 start.go:364] duration metric: took 56.04µs to acquireMachinesLock for "newest-cni-006653"
	I1101 10:21:07.577634  775345 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:21:07.577646  775345 fix.go:54] fixHost starting: 
	I1101 10:21:07.577966  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:07.598527  775345 fix.go:112] recreateIfNeeded on newest-cni-006653: state=Stopped err=<nil>
	W1101 10:21:07.598568  775345 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:21:05.475588  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:21:07.475900  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:06.515156  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:06.516003  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:06.516072  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:06.516133  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:06.558213  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:06.558244  734517 cri.go:89] found id: ""
	I1101 10:21:06.558259  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:06.558332  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.564141  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:06.564236  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:06.600088  734517 cri.go:89] found id: ""
	I1101 10:21:06.600122  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.600134  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:06.600142  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:06.600216  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:06.638676  734517 cri.go:89] found id: ""
	I1101 10:21:06.638722  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.638734  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:06.638744  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:06.638815  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:06.676103  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:06.676133  734517 cri.go:89] found id: ""
	I1101 10:21:06.676144  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:06.676203  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.681722  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:06.681799  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:06.719509  734517 cri.go:89] found id: ""
	I1101 10:21:06.719543  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.719554  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:06.719563  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:06.719637  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:06.752396  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:06.752533  734517 cri.go:89] found id: ""
	I1101 10:21:06.752545  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:06.752603  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.757697  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:06.757763  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:06.790052  734517 cri.go:89] found id: ""
	I1101 10:21:06.790091  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.790103  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:06.790113  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:06.790186  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:06.821398  734517 cri.go:89] found id: ""
	I1101 10:21:06.821436  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.821450  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:06.821475  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:06.821495  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:06.853392  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:06.853425  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:06.912616  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:06.912661  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:06.947720  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:06.947759  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:07.058980  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:07.059023  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:07.080200  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:07.080238  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:07.150168  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:07.150197  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:07.150221  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:07.191996  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:07.192035  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:09.754304  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:09.754761  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:09.754823  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:09.754892  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:09.787037  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:09.787065  734517 cri.go:89] found id: ""
	I1101 10:21:09.787074  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:09.787139  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.791637  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:09.791724  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:09.822734  734517 cri.go:89] found id: ""
	I1101 10:21:09.822762  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.822772  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:09.822778  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:09.822827  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:07.600177  775345 out.go:252] * Restarting existing docker container for "newest-cni-006653" ...
	I1101 10:21:07.600261  775345 cli_runner.go:164] Run: docker start newest-cni-006653
	I1101 10:21:07.887226  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:07.907473  775345 kic.go:430] container "newest-cni-006653" state is running.
	I1101 10:21:07.908052  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:07.930273  775345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json ...
	I1101 10:21:07.930608  775345 machine.go:94] provisionDockerMachine start ...
	I1101 10:21:07.930697  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:07.950868  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:07.951192  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:07.951214  775345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:21:07.951934  775345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59284->127.0.0.1:33208: read: connection reset by peer
	I1101 10:21:11.101533  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-006653
	
	I1101 10:21:11.101567  775345 ubuntu.go:182] provisioning hostname "newest-cni-006653"
	I1101 10:21:11.101627  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.121992  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.122272  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.122293  775345 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-006653 && echo "newest-cni-006653" | sudo tee /etc/hostname
	I1101 10:21:11.278332  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-006653
	
	I1101 10:21:11.278417  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.298025  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.298366  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.298396  775345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-006653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-006653/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-006653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:21:11.446407  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
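The SSH script above rewrites (or appends) the 127.0.1.1 entry so the new hostname resolves locally on the node. A quick check of the result, assuming the script ran as shown:

  grep '^127.0.1.1' /etc/hosts
  # expected: 127.0.1.1 newest-cni-006653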
	I1101 10:21:11.446446  775345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:21:11.446476  775345 ubuntu.go:190] setting up certificates
	I1101 10:21:11.446494  775345 provision.go:84] configureAuth start
	I1101 10:21:11.446585  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:11.467021  775345 provision.go:143] copyHostCerts
	I1101 10:21:11.467089  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:21:11.467107  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:21:11.467188  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:21:11.467319  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:21:11.467328  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:21:11.467356  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:21:11.467431  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:21:11.467438  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:21:11.467464  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:21:11.467535  775345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.newest-cni-006653 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-006653]
	I1101 10:21:11.656041  775345 provision.go:177] copyRemoteCerts
	I1101 10:21:11.656114  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:21:11.656155  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.675562  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:11.780483  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:21:11.801492  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:21:11.822639  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:21:11.844599  775345 provision.go:87] duration metric: took 398.086986ms to configureAuth
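configureAuth generates a server certificate whose SANs are listed in the san=[...] line above and copies it to /etc/docker/server.pem on the node. One way to confirm the SANs made it into the certificate (plain openssl usage, not a minikube command; path taken from the ServerCertPath above):

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'
  # the output should list the DNS names and IPs from the san=[...] line above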
	I1101 10:21:11.844629  775345 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:21:11.844827  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:11.844986  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.865032  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.865396  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.865423  775345 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:21:12.151927  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
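The SSH command above drops an environment file for CRI-O and restarts the service; the echoed output confirms what was written. Verifying the drop-in and the restart by hand would look roughly like:

  cat /etc/sysconfig/crio.minikube
  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  sudo systemctl is-active crio   # should print "active"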
	I1101 10:21:12.151959  775345 machine.go:97] duration metric: took 4.221331346s to provisionDockerMachine
	I1101 10:21:12.151974  775345 start.go:293] postStartSetup for "newest-cni-006653" (driver="docker")
	I1101 10:21:12.151984  775345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:21:12.152046  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:21:12.152087  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.172073  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.276880  775345 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:21:12.281085  775345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:21:12.281117  775345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:21:12.281130  775345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:21:12.281178  775345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:21:12.281267  775345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:21:12.281363  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:21:12.289865  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:21:12.310993  775345 start.go:296] duration metric: took 159.002326ms for postStartSetup
	I1101 10:21:12.311102  775345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:21:12.311149  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.330337  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	W1101 10:21:09.974921  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:21:12.475062  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:12.430860  775345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:21:12.436672  775345 fix.go:56] duration metric: took 4.859015473s for fixHost
	I1101 10:21:12.436705  775345 start.go:83] releasing machines lock for "newest-cni-006653", held for 4.859082301s
	I1101 10:21:12.436786  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:12.456783  775345 ssh_runner.go:195] Run: cat /version.json
	I1101 10:21:12.456896  775345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:21:12.456902  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.457005  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.477799  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.478095  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.637349  775345 ssh_runner.go:195] Run: systemctl --version
	I1101 10:21:12.645138  775345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:21:12.685879  775345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:21:12.691371  775345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:21:12.691434  775345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:21:12.700901  775345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:21:12.700930  775345 start.go:496] detecting cgroup driver to use...
	I1101 10:21:12.700976  775345 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:21:12.701037  775345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:21:12.717316  775345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:21:12.733635  775345 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:21:12.733689  775345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:21:12.750497  775345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:21:12.767331  775345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:21:12.854808  775345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:21:12.938672  775345 docker.go:234] disabling docker service ...
	I1101 10:21:12.938746  775345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:21:12.957137  775345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:21:12.972571  775345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:21:13.074081  775345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:21:13.169823  775345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:21:13.184846  775345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:21:13.204139  775345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:21:13.204216  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.215765  775345 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:21:13.215867  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.227103  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.238022  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.249272  775345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:21:13.259995  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.271255  775345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.282311  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.294977  775345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:21:13.304502  775345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:21:13.313752  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:13.405995  775345 ssh_runner.go:195] Run: sudo systemctl restart crio
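The sed edits between 10:21:13.204 and 10:21:13.282 above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before CRI-O is restarted. A sketch of checking the resulting drop-in; the keys come from the sed expressions above, while any surrounding content depends on the base image:

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10.1"
  # cgroup_manager = "systemd"
  # conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",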
	I1101 10:21:13.532643  775345 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:21:13.532727  775345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:21:13.537752  775345 start.go:564] Will wait 60s for crictl version
	I1101 10:21:13.537818  775345 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.541787  775345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:21:13.571974  775345 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:21:13.572085  775345 ssh_runner.go:195] Run: crio --version
	I1101 10:21:13.608295  775345 ssh_runner.go:195] Run: crio --version
	I1101 10:21:13.643017  775345 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:21:13.643996  775345 cli_runner.go:164] Run: docker network inspect newest-cni-006653 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:21:13.662889  775345 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:21:13.667996  775345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
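The one-liner above is the /etc/hosts update idiom used here: filter out any stale host.minikube.internal line, append the new mapping, write the result to a temp file, then sudo cp it back so only the final copy needs root. The resulting entry, assuming it ran as shown:

  grep 'host.minikube.internal' /etc/hosts
  # 192.168.76.1	host.minikube.internal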
	I1101 10:21:13.681178  775345 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:21:09.860041  734517 cri.go:89] found id: ""
	I1101 10:21:09.860070  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.860080  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:09.860089  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:09.860142  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:09.890661  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:09.890692  734517 cri.go:89] found id: ""
	I1101 10:21:09.890705  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:09.890778  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.895701  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:09.895778  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:09.927449  734517 cri.go:89] found id: ""
	I1101 10:21:09.927477  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.927488  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:09.927505  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:09.927570  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:09.959698  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:09.959729  734517 cri.go:89] found id: ""
	I1101 10:21:09.959742  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:09.959803  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.964405  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:09.964502  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:09.995953  734517 cri.go:89] found id: ""
	I1101 10:21:09.995991  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.996004  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:09.996015  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:09.996073  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:10.030085  734517 cri.go:89] found id: ""
	I1101 10:21:10.030117  734517 logs.go:282] 0 containers: []
	W1101 10:21:10.030126  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:10.030139  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:10.030154  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:10.060407  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:10.060441  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:10.117644  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:10.117690  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:10.152178  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:10.152207  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:10.242540  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:10.242598  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:10.263401  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:10.263441  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:10.324595  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:10.324617  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:10.324633  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:10.362674  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:10.362718  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
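The "Gathering logs" loop above follows a simple crictl pattern: list containers filtered by name to get an ID, then tail that container's logs. A standalone sketch of the same two steps, with the name filter and tail count taken from the log lines above:

  ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
  [ -n "$ID" ] && sudo crictl logs --tail 400 "$ID"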
	I1101 10:21:12.922943  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:12.923478  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:12.923551  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:12.923612  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:12.957773  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:12.957793  734517 cri.go:89] found id: ""
	I1101 10:21:12.957801  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:12.957878  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:12.962381  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:12.962483  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:12.995296  734517 cri.go:89] found id: ""
	I1101 10:21:12.995333  734517 logs.go:282] 0 containers: []
	W1101 10:21:12.995344  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:12.995352  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:12.995430  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:13.033380  734517 cri.go:89] found id: ""
	I1101 10:21:13.033414  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.033426  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:13.033435  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:13.033506  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:13.064948  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:13.064970  734517 cri.go:89] found id: ""
	I1101 10:21:13.064979  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:13.065041  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.069789  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:13.069887  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:13.100580  734517 cri.go:89] found id: ""
	I1101 10:21:13.100614  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.100626  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:13.100635  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:13.100686  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:13.136326  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:13.136359  734517 cri.go:89] found id: ""
	I1101 10:21:13.136370  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:13.136429  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.141519  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:13.141623  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:13.174096  734517 cri.go:89] found id: ""
	I1101 10:21:13.174121  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.174130  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:13.174137  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:13.174185  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:13.207618  734517 cri.go:89] found id: ""
	I1101 10:21:13.207650  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.207662  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:13.207676  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:13.207692  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:13.228225  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:13.228269  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:13.296888  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:13.296924  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:13.296945  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:13.334981  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:13.335028  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:13.397890  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:13.397936  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:13.430702  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:13.430732  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:13.495394  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:13.495444  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:13.533429  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:13.533456  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:13.682134  775345 kubeadm.go:884] updating cluster {Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:21:13.682285  775345 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:21:13.682351  775345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:21:13.719917  775345 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:21:13.719941  775345 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:21:13.719997  775345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:21:13.749397  775345 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:21:13.749421  775345 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:21:13.749429  775345 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:21:13.749550  775345 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-006653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
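The kubelet unit text above is installed as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). The empty ExecStart= line clears the ExecStart inherited from the base kubelet.service so the following line replaces it rather than adding a second command. To view the merged unit plus drop-in on the node:

  systemctl cat kubelet
  # prints kubelet.service followed by kubelet.service.d/10-kubeadm.conf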
	I1101 10:21:13.749653  775345 ssh_runner.go:195] Run: crio config
	I1101 10:21:13.802432  775345 cni.go:84] Creating CNI manager for ""
	I1101 10:21:13.802462  775345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:21:13.802489  775345 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:21:13.802551  775345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-006653 NodeName:newest-cni-006653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:21:13.802705  775345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-006653"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:21:13.802774  775345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:21:13.812295  775345 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:21:13.812378  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:21:13.821815  775345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:21:13.837568  775345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:21:13.852297  775345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1101 10:21:13.866722  775345 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:21:13.871100  775345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:21:13.882942  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:13.967554  775345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:21:13.993768  775345 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653 for IP: 192.168.76.2
	I1101 10:21:13.993792  775345 certs.go:195] generating shared ca certs ...
	I1101 10:21:13.993815  775345 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:13.994012  775345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:21:13.994053  775345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:21:13.994061  775345 certs.go:257] generating profile certs ...
	I1101 10:21:13.994169  775345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.key
	I1101 10:21:13.994235  775345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key.c43daf58
	I1101 10:21:13.994270  775345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key
	I1101 10:21:13.994378  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:21:13.994412  775345 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:21:13.994422  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:21:13.994446  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:21:13.994467  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:21:13.994494  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:21:13.994533  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:21:13.995177  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:21:14.017811  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:21:14.041370  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:21:14.063070  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:21:14.090442  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:21:14.111563  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:21:14.132592  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:21:14.152885  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:21:14.173513  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:21:14.194543  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:21:14.215737  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:21:14.237400  775345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:21:14.252487  775345 ssh_runner.go:195] Run: openssl version
	I1101 10:21:14.260121  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:21:14.271081  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.276116  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.276186  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.313235  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:21:14.323271  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:21:14.334255  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.339072  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.339149  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.377267  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:21:14.387359  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:21:14.398061  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.402635  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.402717  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.440665  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
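The openssl/ln sequence above is the standard OpenSSL CA-directory layout: each trusted certificate in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash. The hashes seen above (b5213941 for minikubeCA.pem, 51391683 for 517687.pem, 3ec20f2e for 5176872.pem) can be reproduced directly:

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"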
	I1101 10:21:14.451644  775345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:21:14.456568  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:21:14.497718  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:21:14.545689  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:21:14.597289  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:21:14.650890  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:21:14.703137  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
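The -checkend 86400 runs above ask openssl whether each control-plane certificate will still be valid in 86400 seconds (24 hours): exit status 0 means the certificate does not expire within that window, non-zero means it does (or is unreadable), which is what drives the decision to keep or regenerate certs. The same check by hand:

  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
    echo "valid for at least another 24h"
  else
    echo "expires within 24h (or could not be read)"
  fi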
	I1101 10:21:14.742240  775345 kubeadm.go:401] StartCluster: {Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:14.742382  775345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:21:14.742487  775345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:21:14.779439  775345 cri.go:89] found id: "7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b"
	I1101 10:21:14.779467  775345 cri.go:89] found id: "922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144"
	I1101 10:21:14.779473  775345 cri.go:89] found id: "c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f"
	I1101 10:21:14.779477  775345 cri.go:89] found id: "49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c"
	I1101 10:21:14.779495  775345 cri.go:89] found id: ""
	I1101 10:21:14.779547  775345 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:21:14.798690  775345 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:21:14.798775  775345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:21:14.810055  775345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:21:14.810075  775345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:21:14.810127  775345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:21:14.821271  775345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:21:14.822995  775345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-006653" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:14.823931  775345 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-006653" cluster setting kubeconfig missing "newest-cni-006653" context setting]
	I1101 10:21:14.825362  775345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.828027  775345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:21:14.840116  775345 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:21:14.840164  775345 kubeadm.go:602] duration metric: took 30.082653ms to restartPrimaryControlPlane
	I1101 10:21:14.840178  775345 kubeadm.go:403] duration metric: took 97.950111ms to StartCluster
	I1101 10:21:14.840202  775345 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.840292  775345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:14.842793  775345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.843615  775345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:21:14.843831  775345 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:21:14.843950  775345 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-006653"
	I1101 10:21:14.843973  775345 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-006653"
	W1101 10:21:14.843985  775345 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:21:14.844018  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.844087  775345 addons.go:70] Setting dashboard=true in profile "newest-cni-006653"
	I1101 10:21:14.844108  775345 addons.go:239] Setting addon dashboard=true in "newest-cni-006653"
	W1101 10:21:14.844115  775345 addons.go:248] addon dashboard should already be in state true
	I1101 10:21:14.844139  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.843915  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:14.844318  775345 addons.go:70] Setting default-storageclass=true in profile "newest-cni-006653"
	I1101 10:21:14.844352  775345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-006653"
	I1101 10:21:14.844561  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.844561  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.844727  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.847357  775345 out.go:179] * Verifying Kubernetes components...
	I1101 10:21:14.849672  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:14.877374  775345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:21:14.878757  775345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:21:14.878783  775345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:21:14.878934  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:14.879347  775345 addons.go:239] Setting addon default-storageclass=true in "newest-cni-006653"
	W1101 10:21:14.879369  775345 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:21:14.879400  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.879894  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.883519  775345 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:21:14.884583  775345 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Nov 01 10:21:02 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:02.6411512Z" level=info msg="Starting container: abc9071641bea332b4abde81670ff54a9c7862acfcee7167c8768ae83eeaddb2" id=eed17f6a-3ece-4d28-9fe9-ce16d90ed061 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:02 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:02.64399262Z" level=info msg="Started container" PID=1841 containerID=abc9071641bea332b4abde81670ff54a9c7862acfcee7167c8768ae83eeaddb2 description=kube-system/coredns-66bc5c9577-c4s2q/coredns id=eed17f6a-3ece-4d28-9fe9-ce16d90ed061 name=/runtime.v1.RuntimeService/StartContainer sandboxID=291c5ea23e10f07e8e369e13b8d335af6bd565fefa5969608988edc787a8414d
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.055009585Z" level=info msg="Running pod sandbox: default/busybox/POD" id=42a45bce-4373-4da5-b567-347e007e2f44 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.055171854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.061307663Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1cf843e898bced1c3bb6f5bde0e575bb28e07308c777024f04c77080f4447af4 UID:cae18218-eb25-4d8d-ba04-f9e73dda2131 NetNS:/var/run/netns/1792f7ae-d879-444c-a599-e4c983372a61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002842e8}] Aliases:map[]}"
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.061343592Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.071727674Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1cf843e898bced1c3bb6f5bde0e575bb28e07308c777024f04c77080f4447af4 UID:cae18218-eb25-4d8d-ba04-f9e73dda2131 NetNS:/var/run/netns/1792f7ae-d879-444c-a599-e4c983372a61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002842e8}] Aliases:map[]}"
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.071904299Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.072897645Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.073758858Z" level=info msg="Ran pod sandbox 1cf843e898bced1c3bb6f5bde0e575bb28e07308c777024f04c77080f4447af4 with infra container: default/busybox/POD" id=42a45bce-4373-4da5-b567-347e007e2f44 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.075396007Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d9152782-8631-4569-a010-9af1828cab18 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.075576272Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d9152782-8631-4569-a010-9af1828cab18 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.075630121Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d9152782-8631-4569-a010-9af1828cab18 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.07670293Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8495061-36e1-4829-8a75-96fe7721b664 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:21:06 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:06.081565628Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.171209816Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f8495061-36e1-4829-8a75-96fe7721b664 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.172194617Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9f4fb3fb-def1-4a9f-823e-2b962d4692b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.173926971Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cdc6689c-9754-4f3d-85c2-ee96c1d22126 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.177578023Z" level=info msg="Creating container: default/busybox/busybox" id=346861db-9946-4c5b-9e11-6b2572ec237e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.177750997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.18210608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.182619358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.203018481Z" level=info msg="Created container 454ec93da908f1b47197c954d8febb2f5485c86bd002383a8dffd1840aa7d7c5: default/busybox/busybox" id=346861db-9946-4c5b-9e11-6b2572ec237e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.204060794Z" level=info msg="Starting container: 454ec93da908f1b47197c954d8febb2f5485c86bd002383a8dffd1840aa7d7c5" id=688874c2-5551-48d8-9bf0-9c3ccd4e9560 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:08 default-k8s-diff-port-535119 crio[775]: time="2025-11-01T10:21:08.206800144Z" level=info msg="Started container" PID=1918 containerID=454ec93da908f1b47197c954d8febb2f5485c86bd002383a8dffd1840aa7d7c5 description=default/busybox/busybox id=688874c2-5551-48d8-9bf0-9c3ccd4e9560 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cf843e898bced1c3bb6f5bde0e575bb28e07308c777024f04c77080f4447af4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	454ec93da908f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1cf843e898bce       busybox                                                default
	abc9071641bea       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   291c5ea23e10f       coredns-66bc5c9577-c4s2q                               kube-system
	09b835597b801       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   5f8e79b38e1ea       storage-provisioner                                    kube-system
	13f3979fa421d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   9a621388202ac       kube-proxy-6tl8q                                       kube-system
	2cea63fd8ee7f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   c8aeaeeac6fce       kindnet-fvr2t                                          kube-system
	f9e82743272d8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   e1b6e4108a3d9       kube-controller-manager-default-k8s-diff-port-535119   kube-system
	af13b5e8e57dd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   f6d474e6a7f3a       kube-scheduler-default-k8s-diff-port-535119            kube-system
	10b10165b26bf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   56db0cb2d08e8       etcd-default-k8s-diff-port-535119                      kube-system
	64a52bfc3fa17       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   98de4268817fd       kube-apiserver-default-k8s-diff-port-535119            kube-system
	
	
	==> coredns [abc9071641bea332b4abde81670ff54a9c7862acfcee7167c8768ae83eeaddb2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44049 - 64988 "HINFO IN 1168023948754487103.3872441140745170797. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037518194s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-535119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-535119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=default-k8s-diff-port-535119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-535119
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:21:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:21:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-535119
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a6fa098f-22f7-43f7-a2bd-0a700ca3d7aa
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-c4s2q                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-535119                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-fvr2t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-535119             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-535119    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-6tl8q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-535119             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node default-k8s-diff-port-535119 event: Registered Node default-k8s-diff-port-535119 in Controller
	  Normal  NodeReady                14s   kubelet          Node default-k8s-diff-port-535119 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [10b10165b26bf266ffe46e5fda30ee4daaa021d1e2831b0e13286839a63b3e3e] <==
	{"level":"warn","ts":"2025-11-01T10:20:42.013074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.020726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.033109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.039639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.051905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.060784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.070606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.077264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.091035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.098128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.106554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.115666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.124104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.132287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.148802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.157328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.176446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.185199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.194473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.212002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.219229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.238077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.246083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.256593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:20:42.312590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35022","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:16 up  3:03,  0 user,  load average: 4.22, 3.70, 2.89
	Linux default-k8s-diff-port-535119 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2cea63fd8ee7fd49251eb855af40d5ce1c5a804bc5565db2955d9da36180c54e] <==
	I1101 10:20:51.636524       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:20:51.636985       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:20:51.637125       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:20:51.637202       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:20:51.637235       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:20:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:20:51.840892       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:20:51.841535       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:20:51.841561       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:20:51.841700       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:20:52.234940       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:20:52.234970       1 metrics.go:72] Registering metrics
	I1101 10:20:52.235025       1 controller.go:711] "Syncing nftables rules"
	I1101 10:21:01.845282       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:21:01.845361       1 main.go:301] handling current node
	I1101 10:21:11.841531       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:21:11.841584       1 main.go:301] handling current node
	
	
	==> kube-apiserver [64a52bfc3fa17019a13263747cba2121851b06e6e99df698b4046025f5bf790c] <==
	E1101 10:20:42.934096       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 10:20:42.982223       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:20:42.987755       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:42.987760       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:20:42.996975       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:42.997540       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:20:43.015277       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:20:43.791133       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:20:43.794945       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:20:43.794965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:20:44.343975       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:20:44.387812       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:20:44.491681       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:20:44.498910       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:20:44.500480       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:20:44.505402       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:20:44.852499       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:20:45.454890       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:20:45.465473       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:20:45.474963       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:20:49.907444       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:49.911490       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:50.704149       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:20:50.954149       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 10:21:14.885315       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:45162: use of closed network connection
	
	
	==> kube-controller-manager [f9e82743272d85755791ef9b30a1176e606e6f0ff11f1a0d7c14a185ee834859] <==
	I1101 10:20:49.843541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:20:49.850417       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:20:49.851722       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:20:49.851750       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:20:49.851781       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:20:49.851804       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:20:49.851867       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:20:49.851885       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:20:49.851907       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:20:49.851869       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:20:49.851874       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:20:49.851755       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:20:49.852004       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-535119"
	I1101 10:20:49.852049       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:20:49.852124       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:20:49.852272       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:20:49.852307       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:20:49.852326       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:20:49.852395       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:20:49.852664       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:20:49.856732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:20:49.856744       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:20:49.857911       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:20:49.876157       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:04.854727       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [13f3979fa421d0cb86c68df80ee7bd87df48482d2ef16d95b7c41740d3d679f4] <==
	I1101 10:20:51.462073       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:20:51.521435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:20:51.621636       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:20:51.621689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:20:51.621805       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:20:51.649959       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:20:51.650055       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:20:51.657816       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:20:51.658321       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:20:51.658378       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:20:51.660380       1 config.go:200] "Starting service config controller"
	I1101 10:20:51.660397       1 config.go:309] "Starting node config controller"
	I1101 10:20:51.660409       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:20:51.660422       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:20:51.660400       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:20:51.660434       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:20:51.660442       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:20:51.660424       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:20:51.660454       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:20:51.760918       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:20:51.760951       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:20:51.760928       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [af13b5e8e57dd9b8986f922c28d56d8903a0f39c458c80073d985fa4c94d0be4] <==
	E1101 10:20:42.883209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:20:42.883216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:20:42.883262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:20:42.883310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:20:42.883330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:20:42.883395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:20:42.883412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:20:42.883549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:20:42.883621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:20:42.883662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:20:43.710897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:20:43.723135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:20:43.788381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:20:43.814633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:20:43.930666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:20:43.941982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:20:44.011509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:20:44.032867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:20:44.039137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:20:44.052530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:20:44.054596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:20:44.090142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:20:44.100261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:20:44.374699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:20:46.077939       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:20:46 default-k8s-diff-port-535119 kubelet[1318]: E1101 10:20:46.340896    1318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-535119\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-535119"
	Nov 01 10:20:46 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:46.358480    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-535119" podStartSLOduration=1.358442862 podStartE2EDuration="1.358442862s" podCreationTimestamp="2025-11-01 10:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:46.358407582 +0000 UTC m=+1.142043088" watchObservedRunningTime="2025-11-01 10:20:46.358442862 +0000 UTC m=+1.142078361"
	Nov 01 10:20:46 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:46.358689    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-535119" podStartSLOduration=1.358676602 podStartE2EDuration="1.358676602s" podCreationTimestamp="2025-11-01 10:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:46.349443122 +0000 UTC m=+1.133078626" watchObservedRunningTime="2025-11-01 10:20:46.358676602 +0000 UTC m=+1.142312127"
	Nov 01 10:20:46 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:46.378913    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-535119" podStartSLOduration=1.378890487 podStartE2EDuration="1.378890487s" podCreationTimestamp="2025-11-01 10:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:46.368348403 +0000 UTC m=+1.151983931" watchObservedRunningTime="2025-11-01 10:20:46.378890487 +0000 UTC m=+1.162525992"
	Nov 01 10:20:49 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:49.843684    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:20:49 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:49.844539    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:20:51 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:51.036423    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5d392ec-6526-4597-bab4-fc7eb2bcc8d6-lib-modules\") pod \"kindnet-fvr2t\" (UID: \"b5d392ec-6526-4597-bab4-fc7eb2bcc8d6\") " pod="kube-system/kindnet-fvr2t"
	Nov 01 10:20:51 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:51.036486    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0824ce9-1334-4605-8a30-e08a4e3f8611-xtables-lock\") pod \"kube-proxy-6tl8q\" (UID: \"f0824ce9-1334-4605-8a30-e08a4e3f8611\") " pod="kube-system/kube-proxy-6tl8q"
	Nov 01 10:20:51 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:51.036518    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0824ce9-1334-4605-8a30-e08a4e3f8611-lib-modules\") pod \"kube-proxy-6tl8q\" (UID: \"f0824ce9-1334-4605-8a30-e08a4e3f8611\") " pod="kube-system/kube-proxy-6tl8q"
	Nov 01 10:20:51 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:51.036550    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktfws\" (UniqueName: \"kubernetes.io/projected/f0824ce9-1334-4605-8a30-e08a4e3f8611-kube-api-access-ktfws\") pod \"kube-proxy-6tl8q\" (UID: \"f0824ce9-1334-4605-8a30-e08a4e3f8611\") " pod="kube-system/kube-proxy-6tl8q"
	Nov 01 10:20:51 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:51.036578    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5d392ec-6526-4597-bab4-fc7eb2bcc8d6-xtables-lock\") pod \"kindnet-fvr2t\" (UID: \"b5d392ec-6526-4597-bab4-fc7eb2bcc8d6\") " pod="kube-system/kindnet-fvr2t"
	Nov 01 10:20:51 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:51.036598    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmkp\" (UniqueName: \"kubernetes.io/projected/b5d392ec-6526-4597-bab4-fc7eb2bcc8d6-kube-api-access-cqmkp\") pod \"kindnet-fvr2t\" (UID: \"b5d392ec-6526-4597-bab4-fc7eb2bcc8d6\") " pod="kube-system/kindnet-fvr2t"
	Nov 01 10:20:51 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:51.036624    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b5d392ec-6526-4597-bab4-fc7eb2bcc8d6-cni-cfg\") pod \"kindnet-fvr2t\" (UID: \"b5d392ec-6526-4597-bab4-fc7eb2bcc8d6\") " pod="kube-system/kindnet-fvr2t"
	Nov 01 10:20:51 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:51.036646    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0824ce9-1334-4605-8a30-e08a4e3f8611-kube-proxy\") pod \"kube-proxy-6tl8q\" (UID: \"f0824ce9-1334-4605-8a30-e08a4e3f8611\") " pod="kube-system/kube-proxy-6tl8q"
	Nov 01 10:20:52 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:52.363664    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6tl8q" podStartSLOduration=2.363641764 podStartE2EDuration="2.363641764s" podCreationTimestamp="2025-11-01 10:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:52.363418982 +0000 UTC m=+7.147054486" watchObservedRunningTime="2025-11-01 10:20:52.363641764 +0000 UTC m=+7.147277267"
	Nov 01 10:20:52 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:20:52.838170    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fvr2t" podStartSLOduration=2.838144819 podStartE2EDuration="2.838144819s" podCreationTimestamp="2025-11-01 10:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:52.374637435 +0000 UTC m=+7.158272931" watchObservedRunningTime="2025-11-01 10:20:52.838144819 +0000 UTC m=+7.621780323"
	Nov 01 10:21:02 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:02.242289    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:21:02 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:02.320447    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf2j5\" (UniqueName: \"kubernetes.io/projected/be187f6d-2e2b-40fd-b9d2-1347371705d6-kube-api-access-bf2j5\") pod \"storage-provisioner\" (UID: \"be187f6d-2e2b-40fd-b9d2-1347371705d6\") " pod="kube-system/storage-provisioner"
	Nov 01 10:21:02 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:02.320503    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2ce5134-6cd5-4a6b-93c4-3ee710006677-config-volume\") pod \"coredns-66bc5c9577-c4s2q\" (UID: \"e2ce5134-6cd5-4a6b-93c4-3ee710006677\") " pod="kube-system/coredns-66bc5c9577-c4s2q"
	Nov 01 10:21:02 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:02.320527    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9r9c\" (UniqueName: \"kubernetes.io/projected/e2ce5134-6cd5-4a6b-93c4-3ee710006677-kube-api-access-k9r9c\") pod \"coredns-66bc5c9577-c4s2q\" (UID: \"e2ce5134-6cd5-4a6b-93c4-3ee710006677\") " pod="kube-system/coredns-66bc5c9577-c4s2q"
	Nov 01 10:21:02 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:02.320557    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/be187f6d-2e2b-40fd-b9d2-1347371705d6-tmp\") pod \"storage-provisioner\" (UID: \"be187f6d-2e2b-40fd-b9d2-1347371705d6\") " pod="kube-system/storage-provisioner"
	Nov 01 10:21:03 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:03.406492    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c4s2q" podStartSLOduration=12.406466741 podStartE2EDuration="12.406466741s" podCreationTimestamp="2025-11-01 10:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:21:03.406350091 +0000 UTC m=+18.189985607" watchObservedRunningTime="2025-11-01 10:21:03.406466741 +0000 UTC m=+18.190102245"
	Nov 01 10:21:03 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:03.406640    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.406630224 podStartE2EDuration="12.406630224s" podCreationTimestamp="2025-11-01 10:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:21:03.39169409 +0000 UTC m=+18.175329585" watchObservedRunningTime="2025-11-01 10:21:03.406630224 +0000 UTC m=+18.190265727"
	Nov 01 10:21:05 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:05.843116    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfcl\" (UniqueName: \"kubernetes.io/projected/cae18218-eb25-4d8d-ba04-f9e73dda2131-kube-api-access-xkfcl\") pod \"busybox\" (UID: \"cae18218-eb25-4d8d-ba04-f9e73dda2131\") " pod="default/busybox"
	Nov 01 10:21:08 default-k8s-diff-port-535119 kubelet[1318]: I1101 10:21:08.406911    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.309773389 podStartE2EDuration="3.406881743s" podCreationTimestamp="2025-11-01 10:21:05 +0000 UTC" firstStartedPulling="2025-11-01 10:21:06.076129035 +0000 UTC m=+20.859764519" lastFinishedPulling="2025-11-01 10:21:08.173237386 +0000 UTC m=+22.956872873" observedRunningTime="2025-11-01 10:21:08.40648109 +0000 UTC m=+23.190116594" watchObservedRunningTime="2025-11-01 10:21:08.406881743 +0000 UTC m=+23.190517247"
	
	
	==> storage-provisioner [09b835597b801f6d89b36a3de03d74b81d896a9da36eaa8568705349637eb075] <==
	I1101 10:21:02.653606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:21:02.666207       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:21:02.666268       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:21:02.669177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:02.676789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:21:02.677053       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:21:02.677194       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac9104e3-50b9-4617-bb83-1ca4dc037b6d", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-535119_2868e355-1cca-45ba-a992-d7007865555e became leader
	I1101 10:21:02.677279       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-535119_2868e355-1cca-45ba-a992-d7007865555e!
	W1101 10:21:02.679670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:02.684294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:21:02.777675       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-535119_2868e355-1cca-45ba-a992-d7007865555e!
	W1101 10:21:04.687962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:04.693821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:06.698498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:06.704572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:08.708748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:08.713806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:10.717369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:10.722459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:12.726038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:12.730384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:14.735280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:14.742686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:16.750462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:16.758076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-535119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.78s)
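
The two post-mortem helper commands captured above can be rerun by hand to check whether the cluster itself was healthy when the addon step failed; a minimal sketch, assuming the default-k8s-diff-port-535119 profile and its kubectl context are still present in the same test workspace (arguments are quoted here only so they are safe to paste into an interactive shell):

    # apiserver state for the profile, as invoked by helpers_test.go:262
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119

    # all pods, in every namespace, whose phase is not Running, as invoked by helpers_test.go:269
    kubectl --context default-k8s-diff-port-535119 get po -A \
      -o=jsonpath='{.items[*].metadata.name}' \
      --field-selector='status.phase!=Running'

If the kubectl query returns nothing, that matches the describe-nodes dump above (node Ready, all nine pods scheduled), which would point at the addons enable path rather than pod scheduling.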

x
+
TestStartStop/group/newest-cni/serial/Pause (6.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-006653 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-006653 --alsologtostderr -v=1: exit status 80 (2.494132705s)

-- stdout --
	* Pausing node newest-cni-006653 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:21:18.884888  778594 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:21:18.885181  778594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:18.885190  778594 out.go:374] Setting ErrFile to fd 2...
	I1101 10:21:18.885194  778594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:18.885448  778594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:21:18.885747  778594 out.go:368] Setting JSON to false
	I1101 10:21:18.885792  778594 mustload.go:66] Loading cluster: newest-cni-006653
	I1101 10:21:18.886170  778594 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:18.886588  778594 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:18.907480  778594 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:18.907806  778594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:18.982501  778594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-01 10:21:18.968704672 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:18.983444  778594 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-006653 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:21:18.985455  778594 out.go:179] * Pausing node newest-cni-006653 ... 
	I1101 10:21:18.987053  778594 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:18.987412  778594 ssh_runner.go:195] Run: systemctl --version
	I1101 10:21:18.987466  778594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:19.009589  778594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:19.119184  778594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:21:19.135288  778594 pause.go:52] kubelet running: true
	I1101 10:21:19.135378  778594 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:21:19.313713  778594 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:21:19.313827  778594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:21:19.416155  778594 cri.go:89] found id: "3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe"
	I1101 10:21:19.416177  778594 cri.go:89] found id: "5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729"
	I1101 10:21:19.416181  778594 cri.go:89] found id: "7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b"
	I1101 10:21:19.416184  778594 cri.go:89] found id: "922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144"
	I1101 10:21:19.416188  778594 cri.go:89] found id: "c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f"
	I1101 10:21:19.416192  778594 cri.go:89] found id: "49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c"
	I1101 10:21:19.416196  778594 cri.go:89] found id: ""
	I1101 10:21:19.416234  778594 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:21:19.431045  778594 retry.go:31] will retry after 188.655641ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:19Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:21:19.620552  778594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:21:19.635783  778594 pause.go:52] kubelet running: false
	I1101 10:21:19.635858  778594 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:21:19.781534  778594 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:21:19.781642  778594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:21:19.873069  778594 cri.go:89] found id: "3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe"
	I1101 10:21:19.873108  778594 cri.go:89] found id: "5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729"
	I1101 10:21:19.873115  778594 cri.go:89] found id: "7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b"
	I1101 10:21:19.873121  778594 cri.go:89] found id: "922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144"
	I1101 10:21:19.873126  778594 cri.go:89] found id: "c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f"
	I1101 10:21:19.873132  778594 cri.go:89] found id: "49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c"
	I1101 10:21:19.873137  778594 cri.go:89] found id: ""
	I1101 10:21:19.873209  778594 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:21:19.888428  778594 retry.go:31] will retry after 239.816508ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:19Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:21:20.129059  778594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:21:20.143783  778594 pause.go:52] kubelet running: false
	I1101 10:21:20.143858  778594 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:21:20.278772  778594 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:21:20.278900  778594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:21:20.353526  778594 cri.go:89] found id: "3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe"
	I1101 10:21:20.353554  778594 cri.go:89] found id: "5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729"
	I1101 10:21:20.353560  778594 cri.go:89] found id: "7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b"
	I1101 10:21:20.353566  778594 cri.go:89] found id: "922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144"
	I1101 10:21:20.353571  778594 cri.go:89] found id: "c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f"
	I1101 10:21:20.353576  778594 cri.go:89] found id: "49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c"
	I1101 10:21:20.353580  778594 cri.go:89] found id: ""
	I1101 10:21:20.353630  778594 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:21:20.367615  778594 retry.go:31] will retry after 712.606204ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:20Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:21:21.081291  778594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:21:21.095851  778594 pause.go:52] kubelet running: false
	I1101 10:21:21.095933  778594 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:21:21.210586  778594 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:21:21.210665  778594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:21:21.285129  778594 cri.go:89] found id: "3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe"
	I1101 10:21:21.285171  778594 cri.go:89] found id: "5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729"
	I1101 10:21:21.285175  778594 cri.go:89] found id: "7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b"
	I1101 10:21:21.285179  778594 cri.go:89] found id: "922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144"
	I1101 10:21:21.285181  778594 cri.go:89] found id: "c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f"
	I1101 10:21:21.285185  778594 cri.go:89] found id: "49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c"
	I1101 10:21:21.285187  778594 cri.go:89] found id: ""
	I1101 10:21:21.285231  778594 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:21:21.300506  778594 out.go:203] 
	W1101 10:21:21.301796  778594 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:21:21.301820  778594 out.go:285] * 
	* 
	W1101 10:21:21.306115  778594 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:21:21.307254  778594 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-006653 --alsologtostderr -v=1 failed: exit status 80
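Note: every retry of `sudo runc list -f json` above failed with "open /run/runc: no such file or directory", i.e. the runc state directory does not exist on the node even though CRI-O still reports running containers. A minimal triage sketch, assuming the newest-cni-006653 node is still up (the /run/crun path is an assumption, since CRI-O may be configured with crun rather than runc as its default OCI runtime):

    out/minikube-linux-amd64 -p newest-cni-006653 ssh "sudo crictl ps"
    out/minikube-linux-amd64 -p newest-cni-006653 ssh "sudo ls /run/runc /run/crun"

If only /run/crun exists, the pause path's use of `runc list` (visible in the stderr above) would explain the GUEST_PAUSE failure seen here.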
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-006653
helpers_test.go:243: (dbg) docker inspect newest-cni-006653:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64",
	        "Created": "2025-11-01T10:20:40.630212993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 775547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:21:07.628914075Z",
	            "FinishedAt": "2025-11-01T10:21:06.668479491Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/hosts",
	        "LogPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64-json.log",
	        "Name": "/newest-cni-006653",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-006653:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-006653",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64",
	                "LowerDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-006653",
	                "Source": "/var/lib/docker/volumes/newest-cni-006653/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-006653",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-006653",
	                "name.minikube.sigs.k8s.io": "newest-cni-006653",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3037afbf122899f407145aa4bca26f74da21e0b95c3162bde124afc8adb9a15",
	            "SandboxKey": "/var/run/docker/netns/e3037afbf122",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-006653": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:d7:93:77:28:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c02c09c0ce161b2b9f0f4d8dfbab9af05a638642c6978f8142ed5d4368be572",
	                    "EndpointID": "a222b9f83a6e8b9fb089f29b07febc50747ef06c88a9d0da5f06d27858859657",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-006653",
	                        "91a32a4040ae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
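The inspect output above shows the KIC container itself is still "Running": true and "Paused": false, so the pause failure happened inside the guest (pausing the CRI containers) rather than at the Docker layer. A one-liner to check just those fields (container name taken from the output above):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-006653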
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006653 -n newest-cni-006653
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006653 -n newest-cni-006653: exit status 2 (352.833248ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
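The `--format` flag used above is a Go template over minikube's status struct, so individual components can be queried the same way the harness does with {{.Host}} and {{.APIServer}}; for example ({{.Kubelet}} is assumed here from the standard status fields, the others appear elsewhere in this report):

    out/minikube-linux-amd64 status --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}' -p newest-cni-006653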
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-006653 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-006653 logs -n 25: (1.032115705s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ old-k8s-version-556573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ no-preload-680879 image list --format=json                                                                                                                                                                                                    │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p no-preload-680879 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p disable-driver-mounts-083568                                                                                                                                                                                                               │ disable-driver-mounts-083568 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p cert-expiration-577441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p cert-expiration-577441                                                                                                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-006653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ stop    │ -p newest-cni-006653 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-006653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-535119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-535119 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ image   │ newest-cni-006653 image list --format=json                                                                                                                                                                                                    │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ pause   │ -p newest-cni-006653 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:21:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:21:07.368818  775345 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:21:07.368991  775345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:07.369005  775345 out.go:374] Setting ErrFile to fd 2...
	I1101 10:21:07.369011  775345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:07.369282  775345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:21:07.369804  775345 out.go:368] Setting JSON to false
	I1101 10:21:07.372138  775345 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11004,"bootTime":1761981463,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:21:07.372273  775345 start.go:143] virtualization: kvm guest
	I1101 10:21:07.374034  775345 out.go:179] * [newest-cni-006653] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:21:07.375251  775345 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:21:07.375276  775345 notify.go:221] Checking for updates...
	I1101 10:21:07.377188  775345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:21:07.378236  775345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:07.379230  775345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:21:07.380231  775345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:21:07.381285  775345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:21:07.382730  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:07.383283  775345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:21:07.412824  775345 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:21:07.413000  775345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:07.477031  775345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:21:07.464068959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:07.477164  775345 docker.go:319] overlay module found
	I1101 10:21:07.479294  775345 out.go:179] * Using the docker driver based on existing profile
	I1101 10:21:07.480228  775345 start.go:309] selected driver: docker
	I1101 10:21:07.480246  775345 start.go:930] validating driver "docker" against &{Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:07.480361  775345 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:21:07.481141  775345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:07.547108  775345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:21:07.535480294 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:07.547439  775345 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:21:07.547475  775345 cni.go:84] Creating CNI manager for ""
	I1101 10:21:07.547541  775345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:21:07.547651  775345 start.go:353] cluster config:
	{Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:07.549748  775345 out.go:179] * Starting "newest-cni-006653" primary control-plane node in "newest-cni-006653" cluster
	I1101 10:21:07.550569  775345 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:21:07.551613  775345 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:21:07.552531  775345 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:21:07.552589  775345 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:21:07.552604  775345 cache.go:59] Caching tarball of preloaded images
	I1101 10:21:07.552645  775345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:21:07.552722  775345 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:21:07.552741  775345 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:21:07.552950  775345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json ...
	I1101 10:21:07.577411  775345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:21:07.577438  775345 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:21:07.577479  775345 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:21:07.577518  775345 start.go:360] acquireMachinesLock for newest-cni-006653: {Name:mkf496d0b80c7855406646357bd774886a0856a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:21:07.577606  775345 start.go:364] duration metric: took 56.04µs to acquireMachinesLock for "newest-cni-006653"
	I1101 10:21:07.577634  775345 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:21:07.577646  775345 fix.go:54] fixHost starting: 
	I1101 10:21:07.577966  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:07.598527  775345 fix.go:112] recreateIfNeeded on newest-cni-006653: state=Stopped err=<nil>
	W1101 10:21:07.598568  775345 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:21:05.475588  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:21:07.475900  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:06.515156  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:06.516003  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:06.516072  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:06.516133  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:06.558213  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:06.558244  734517 cri.go:89] found id: ""
	I1101 10:21:06.558259  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:06.558332  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.564141  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:06.564236  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:06.600088  734517 cri.go:89] found id: ""
	I1101 10:21:06.600122  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.600134  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:06.600142  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:06.600216  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:06.638676  734517 cri.go:89] found id: ""
	I1101 10:21:06.638722  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.638734  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:06.638744  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:06.638815  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:06.676103  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:06.676133  734517 cri.go:89] found id: ""
	I1101 10:21:06.676144  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:06.676203  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.681722  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:06.681799  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:06.719509  734517 cri.go:89] found id: ""
	I1101 10:21:06.719543  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.719554  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:06.719563  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:06.719637  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:06.752396  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:06.752533  734517 cri.go:89] found id: ""
	I1101 10:21:06.752545  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:06.752603  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.757697  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:06.757763  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:06.790052  734517 cri.go:89] found id: ""
	I1101 10:21:06.790091  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.790103  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:06.790113  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:06.790186  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:06.821398  734517 cri.go:89] found id: ""
	I1101 10:21:06.821436  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.821450  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:06.821475  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:06.821495  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:06.853392  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:06.853425  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:06.912616  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:06.912661  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:06.947720  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:06.947759  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:07.058980  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:07.059023  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:07.080200  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:07.080238  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:07.150168  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:07.150197  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:07.150221  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:07.191996  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:07.192035  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:09.754304  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:09.754761  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:09.754823  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:09.754892  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:09.787037  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:09.787065  734517 cri.go:89] found id: ""
	I1101 10:21:09.787074  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:09.787139  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.791637  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:09.791724  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:09.822734  734517 cri.go:89] found id: ""
	I1101 10:21:09.822762  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.822772  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:09.822778  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:09.822827  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:07.600177  775345 out.go:252] * Restarting existing docker container for "newest-cni-006653" ...
	I1101 10:21:07.600261  775345 cli_runner.go:164] Run: docker start newest-cni-006653
	I1101 10:21:07.887226  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:07.907473  775345 kic.go:430] container "newest-cni-006653" state is running.
	I1101 10:21:07.908052  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:07.930273  775345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json ...
	I1101 10:21:07.930608  775345 machine.go:94] provisionDockerMachine start ...
	I1101 10:21:07.930697  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:07.950868  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:07.951192  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:07.951214  775345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:21:07.951934  775345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59284->127.0.0.1:33208: read: connection reset by peer
	I1101 10:21:11.101533  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-006653
	
	I1101 10:21:11.101567  775345 ubuntu.go:182] provisioning hostname "newest-cni-006653"
	I1101 10:21:11.101627  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.121992  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.122272  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.122293  775345 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-006653 && echo "newest-cni-006653" | sudo tee /etc/hostname
	I1101 10:21:11.278332  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-006653
	
	I1101 10:21:11.278417  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.298025  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.298366  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.298396  775345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-006653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-006653/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-006653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:21:11.446407  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:21:11.446446  775345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:21:11.446476  775345 ubuntu.go:190] setting up certificates
	I1101 10:21:11.446494  775345 provision.go:84] configureAuth start
	I1101 10:21:11.446585  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:11.467021  775345 provision.go:143] copyHostCerts
	I1101 10:21:11.467089  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:21:11.467107  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:21:11.467188  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:21:11.467319  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:21:11.467328  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:21:11.467356  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:21:11.467431  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:21:11.467438  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:21:11.467464  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:21:11.467535  775345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.newest-cni-006653 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-006653]
	I1101 10:21:11.656041  775345 provision.go:177] copyRemoteCerts
	I1101 10:21:11.656114  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:21:11.656155  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.675562  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:11.780483  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:21:11.801492  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:21:11.822639  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:21:11.844599  775345 provision.go:87] duration metric: took 398.086986ms to configureAuth
	I1101 10:21:11.844629  775345 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:21:11.844827  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:11.844986  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.865032  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.865396  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.865423  775345 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:21:12.151927  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:21:12.151959  775345 machine.go:97] duration metric: took 4.221331346s to provisionDockerMachine
	I1101 10:21:12.151974  775345 start.go:293] postStartSetup for "newest-cni-006653" (driver="docker")
	I1101 10:21:12.151984  775345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:21:12.152046  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:21:12.152087  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.172073  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.276880  775345 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:21:12.281085  775345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:21:12.281117  775345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:21:12.281130  775345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:21:12.281178  775345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:21:12.281267  775345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:21:12.281363  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:21:12.289865  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:21:12.310993  775345 start.go:296] duration metric: took 159.002326ms for postStartSetup
	I1101 10:21:12.311102  775345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:21:12.311149  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.330337  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	W1101 10:21:09.974921  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:21:12.475062  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:12.430860  775345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:21:12.436672  775345 fix.go:56] duration metric: took 4.859015473s for fixHost
	I1101 10:21:12.436705  775345 start.go:83] releasing machines lock for "newest-cni-006653", held for 4.859082301s
	I1101 10:21:12.436786  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:12.456783  775345 ssh_runner.go:195] Run: cat /version.json
	I1101 10:21:12.456896  775345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:21:12.456902  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.457005  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.477799  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.478095  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.637349  775345 ssh_runner.go:195] Run: systemctl --version
	I1101 10:21:12.645138  775345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:21:12.685879  775345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:21:12.691371  775345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:21:12.691434  775345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:21:12.700901  775345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:21:12.700930  775345 start.go:496] detecting cgroup driver to use...
	I1101 10:21:12.700976  775345 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:21:12.701037  775345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:21:12.717316  775345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:21:12.733635  775345 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:21:12.733689  775345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:21:12.750497  775345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:21:12.767331  775345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:21:12.854808  775345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:21:12.938672  775345 docker.go:234] disabling docker service ...
	I1101 10:21:12.938746  775345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:21:12.957137  775345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:21:12.972571  775345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:21:13.074081  775345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:21:13.169823  775345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:21:13.184846  775345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:21:13.204139  775345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:21:13.204216  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.215765  775345 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:21:13.215867  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.227103  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.238022  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.249272  775345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:21:13.259995  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.271255  775345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.282311  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.294977  775345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:21:13.304502  775345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:21:13.313752  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:13.405995  775345 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:21:13.532643  775345 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:21:13.532727  775345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:21:13.537752  775345 start.go:564] Will wait 60s for crictl version
	I1101 10:21:13.537818  775345 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.541787  775345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:21:13.571974  775345 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:21:13.572085  775345 ssh_runner.go:195] Run: crio --version
	I1101 10:21:13.608295  775345 ssh_runner.go:195] Run: crio --version
	I1101 10:21:13.643017  775345 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:21:13.643996  775345 cli_runner.go:164] Run: docker network inspect newest-cni-006653 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:21:13.662889  775345 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:21:13.667996  775345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:21:13.681178  775345 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:21:09.860041  734517 cri.go:89] found id: ""
	I1101 10:21:09.860070  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.860080  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:09.860089  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:09.860142  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:09.890661  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:09.890692  734517 cri.go:89] found id: ""
	I1101 10:21:09.890705  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:09.890778  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.895701  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:09.895778  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:09.927449  734517 cri.go:89] found id: ""
	I1101 10:21:09.927477  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.927488  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:09.927505  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:09.927570  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:09.959698  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:09.959729  734517 cri.go:89] found id: ""
	I1101 10:21:09.959742  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:09.959803  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.964405  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:09.964502  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:09.995953  734517 cri.go:89] found id: ""
	I1101 10:21:09.995991  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.996004  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:09.996015  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:09.996073  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:10.030085  734517 cri.go:89] found id: ""
	I1101 10:21:10.030117  734517 logs.go:282] 0 containers: []
	W1101 10:21:10.030126  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:10.030139  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:10.030154  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:10.060407  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:10.060441  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:10.117644  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:10.117690  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:10.152178  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:10.152207  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:10.242540  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:10.242598  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:10.263401  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:10.263441  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:10.324595  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:10.324617  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:10.324633  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:10.362674  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:10.362718  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:12.922943  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:12.923478  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:12.923551  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:12.923612  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:12.957773  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:12.957793  734517 cri.go:89] found id: ""
	I1101 10:21:12.957801  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:12.957878  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:12.962381  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:12.962483  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:12.995296  734517 cri.go:89] found id: ""
	I1101 10:21:12.995333  734517 logs.go:282] 0 containers: []
	W1101 10:21:12.995344  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:12.995352  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:12.995430  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:13.033380  734517 cri.go:89] found id: ""
	I1101 10:21:13.033414  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.033426  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:13.033435  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:13.033506  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:13.064948  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:13.064970  734517 cri.go:89] found id: ""
	I1101 10:21:13.064979  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:13.065041  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.069789  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:13.069887  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:13.100580  734517 cri.go:89] found id: ""
	I1101 10:21:13.100614  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.100626  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:13.100635  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:13.100686  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:13.136326  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:13.136359  734517 cri.go:89] found id: ""
	I1101 10:21:13.136370  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:13.136429  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.141519  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:13.141623  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:13.174096  734517 cri.go:89] found id: ""
	I1101 10:21:13.174121  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.174130  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:13.174137  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:13.174185  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:13.207618  734517 cri.go:89] found id: ""
	I1101 10:21:13.207650  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.207662  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:13.207676  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:13.207692  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:13.228225  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:13.228269  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:13.296888  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:13.296924  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:13.296945  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:13.334981  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:13.335028  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:13.397890  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:13.397936  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:13.430702  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:13.430732  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:13.495394  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:13.495444  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:13.533429  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:13.533456  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:13.682134  775345 kubeadm.go:884] updating cluster {Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:21:13.682285  775345 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:21:13.682351  775345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:21:13.719917  775345 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:21:13.719941  775345 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:21:13.719997  775345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:21:13.749397  775345 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:21:13.749421  775345 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:21:13.749429  775345 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:21:13.749550  775345 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-006653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:21:13.749653  775345 ssh_runner.go:195] Run: crio config
	I1101 10:21:13.802432  775345 cni.go:84] Creating CNI manager for ""
	I1101 10:21:13.802462  775345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:21:13.802489  775345 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:21:13.802551  775345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-006653 NodeName:newest-cni-006653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:21:13.802705  775345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-006653"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:21:13.802774  775345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:21:13.812295  775345 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:21:13.812378  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:21:13.821815  775345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:21:13.837568  775345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:21:13.852297  775345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1101 10:21:13.866722  775345 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:21:13.871100  775345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:21:13.882942  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:13.967554  775345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:21:13.993768  775345 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653 for IP: 192.168.76.2
	I1101 10:21:13.993792  775345 certs.go:195] generating shared ca certs ...
	I1101 10:21:13.993815  775345 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:13.994012  775345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:21:13.994053  775345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:21:13.994061  775345 certs.go:257] generating profile certs ...
	I1101 10:21:13.994169  775345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.key
	I1101 10:21:13.994235  775345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key.c43daf58
	I1101 10:21:13.994270  775345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key
	I1101 10:21:13.994378  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:21:13.994412  775345 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:21:13.994422  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:21:13.994446  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:21:13.994467  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:21:13.994494  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:21:13.994533  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:21:13.995177  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:21:14.017811  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:21:14.041370  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:21:14.063070  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:21:14.090442  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:21:14.111563  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:21:14.132592  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:21:14.152885  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:21:14.173513  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:21:14.194543  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:21:14.215737  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:21:14.237400  775345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:21:14.252487  775345 ssh_runner.go:195] Run: openssl version
	I1101 10:21:14.260121  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:21:14.271081  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.276116  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.276186  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.313235  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:21:14.323271  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:21:14.334255  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.339072  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.339149  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.377267  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:21:14.387359  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:21:14.398061  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.402635  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.402717  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.440665  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:21:14.451644  775345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:21:14.456568  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:21:14.497718  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:21:14.545689  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:21:14.597289  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:21:14.650890  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:21:14.703137  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:21:14.742240  775345 kubeadm.go:401] StartCluster: {Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:14.742382  775345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:21:14.742487  775345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:21:14.779439  775345 cri.go:89] found id: "7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b"
	I1101 10:21:14.779467  775345 cri.go:89] found id: "922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144"
	I1101 10:21:14.779473  775345 cri.go:89] found id: "c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f"
	I1101 10:21:14.779477  775345 cri.go:89] found id: "49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c"
	I1101 10:21:14.779495  775345 cri.go:89] found id: ""
	I1101 10:21:14.779547  775345 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:21:14.798690  775345 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:21:14.798775  775345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:21:14.810055  775345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:21:14.810075  775345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:21:14.810127  775345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:21:14.821271  775345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:21:14.822995  775345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-006653" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:14.823931  775345 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-006653" cluster setting kubeconfig missing "newest-cni-006653" context setting]
	I1101 10:21:14.825362  775345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.828027  775345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:21:14.840116  775345 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:21:14.840164  775345 kubeadm.go:602] duration metric: took 30.082653ms to restartPrimaryControlPlane
	I1101 10:21:14.840178  775345 kubeadm.go:403] duration metric: took 97.950111ms to StartCluster
	I1101 10:21:14.840202  775345 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.840292  775345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:14.842793  775345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.843615  775345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:21:14.843831  775345 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:21:14.843950  775345 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-006653"
	I1101 10:21:14.843973  775345 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-006653"
	W1101 10:21:14.843985  775345 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:21:14.844018  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.844087  775345 addons.go:70] Setting dashboard=true in profile "newest-cni-006653"
	I1101 10:21:14.844108  775345 addons.go:239] Setting addon dashboard=true in "newest-cni-006653"
	W1101 10:21:14.844115  775345 addons.go:248] addon dashboard should already be in state true
	I1101 10:21:14.844139  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.843915  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:14.844318  775345 addons.go:70] Setting default-storageclass=true in profile "newest-cni-006653"
	I1101 10:21:14.844352  775345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-006653"
	I1101 10:21:14.844561  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.844561  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.844727  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.847357  775345 out.go:179] * Verifying Kubernetes components...
	I1101 10:21:14.849672  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:14.877374  775345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:21:14.878757  775345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:21:14.878783  775345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:21:14.878934  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:14.879347  775345 addons.go:239] Setting addon default-storageclass=true in "newest-cni-006653"
	W1101 10:21:14.879369  775345 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:21:14.879400  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.879894  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.883519  775345 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:21:14.884583  775345 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:21:14.885685  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:21:14.885714  775345 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:21:14.885798  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:14.913363  775345 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:21:14.913437  775345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:21:14.913516  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:14.920058  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:14.928765  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:14.950655  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:15.031938  775345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:21:15.051569  775345 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:21:15.051678  775345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:21:15.053736  775345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:21:15.062921  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:21:15.062952  775345 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:21:15.068552  775345 api_server.go:72] duration metric: took 224.885945ms to wait for apiserver process to appear ...
	I1101 10:21:15.069778  775345 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:21:15.069830  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:15.081144  775345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:21:15.083359  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:21:15.083385  775345 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:21:15.101356  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:21:15.101388  775345 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:21:15.128679  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:21:15.128710  775345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:21:15.146757  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:21:15.146787  775345 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:21:15.165586  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:21:15.165620  775345 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:21:15.182614  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:21:15.182646  775345 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:21:15.201788  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:21:15.201820  775345 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:21:15.218759  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:21:15.218792  775345 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:21:15.240068  775345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:21:16.444318  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:21:16.444367  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:21:16.444389  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:16.456535  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:21:16.456566  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:21:16.570120  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:16.579924  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:21:16.579969  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:21:17.070474  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:17.078357  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:21:17.078395  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:21:17.158635  775345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.104857927s)
	I1101 10:21:17.158701  775345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.077520733s)
	I1101 10:21:17.158895  775345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.918742923s)
	I1101 10:21:17.160236  775345 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-006653 addons enable metrics-server
	
	I1101 10:21:17.172330  775345 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 10:21:17.173432  775345 addons.go:515] duration metric: took 2.329589022s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:21:17.570761  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:17.577025  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:21:17.577070  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:21:18.070485  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:18.075052  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:21:18.076253  775345 api_server.go:141] control plane version: v1.34.1
	I1101 10:21:18.076286  775345 api_server.go:131] duration metric: took 3.006491031s to wait for apiserver health ...
	I1101 10:21:18.076297  775345 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:21:18.080581  775345 system_pods.go:59] 8 kube-system pods found
	I1101 10:21:18.080630  775345 system_pods.go:61] "coredns-66bc5c9577-gn6zx" [a7bda15a-3bb6-4481-b103-cc8eed070995] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:21:18.080643  775345 system_pods.go:61] "etcd-newest-cni-006653" [e2c0df01-64cf-4a18-821f-527dddcf3772] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:21:18.080652  775345 system_pods.go:61] "kindnet-487js" [0400e397-aa86-4a6e-976e-ff1a3844727b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:21:18.080662  775345 system_pods.go:61] "kube-apiserver-newest-cni-006653" [2bd8a1b8-97ce-4f57-90a9-e523107f3bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:21:18.080671  775345 system_pods.go:61] "kube-controller-manager-newest-cni-006653" [b95204ce-cd11-470d-add1-5c7ca7f0494d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:21:18.080683  775345 system_pods.go:61] "kube-proxy-kp445" [ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:21:18.080691  775345 system_pods.go:61] "kube-scheduler-newest-cni-006653" [431cf3e8-7ee3-4c54-8e86-21f4a7901987] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:21:18.080702  775345 system_pods.go:61] "storage-provisioner" [78945df3-ecd6-4d3d-aadb-3b0eb7fb8967] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:21:18.080713  775345 system_pods.go:74] duration metric: took 4.407136ms to wait for pod list to return data ...
	I1101 10:21:18.080727  775345 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:21:18.083068  775345 default_sa.go:45] found service account: "default"
	I1101 10:21:18.083099  775345 default_sa.go:55] duration metric: took 2.363908ms for default service account to be created ...
	I1101 10:21:18.083113  775345 kubeadm.go:587] duration metric: took 3.239455542s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:21:18.083135  775345 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:21:18.085773  775345 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:21:18.085846  775345 node_conditions.go:123] node cpu capacity is 8
	I1101 10:21:18.085861  775345 node_conditions.go:105] duration metric: took 2.721012ms to run NodePressure ...
	I1101 10:21:18.085876  775345 start.go:242] waiting for startup goroutines ...
	I1101 10:21:18.085883  775345 start.go:247] waiting for cluster config update ...
	I1101 10:21:18.085894  775345 start.go:256] writing updated cluster config ...
	I1101 10:21:18.086182  775345 ssh_runner.go:195] Run: rm -f paused
	I1101 10:21:18.152570  775345 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:21:18.154160  775345 out.go:179] * Done! kubectl is now configured to use "newest-cni-006653" cluster and "default" namespace by default
	W1101 10:21:14.977046  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:21:17.475208  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:16.140491  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:16.141117  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:16.141183  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:16.141241  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:16.180266  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:16.180314  734517 cri.go:89] found id: ""
	I1101 10:21:16.180327  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:16.180496  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:16.185864  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:16.185945  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:16.221184  734517 cri.go:89] found id: ""
	I1101 10:21:16.221219  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.221235  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:16.221243  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:16.221303  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:16.255846  734517 cri.go:89] found id: ""
	I1101 10:21:16.255882  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.255893  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:16.255902  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:16.255970  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:16.299501  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:16.299533  734517 cri.go:89] found id: ""
	I1101 10:21:16.299544  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:16.299612  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:16.307031  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:16.307119  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:16.367122  734517 cri.go:89] found id: ""
	I1101 10:21:16.367238  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.367268  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:16.367288  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:16.367377  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:16.404438  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:16.404466  734517 cri.go:89] found id: ""
	I1101 10:21:16.404513  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:16.404579  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:16.409727  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:16.409792  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:16.456655  734517 cri.go:89] found id: ""
	I1101 10:21:16.456680  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.456691  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:16.456699  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:16.456759  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:16.518616  734517 cri.go:89] found id: ""
	I1101 10:21:16.518650  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.518662  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:16.518676  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:16.518693  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:16.549435  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:16.549507  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:16.640815  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:16.640850  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:16.640868  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:16.694366  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:16.694425  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:16.770807  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:16.770870  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:16.813677  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:16.813711  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:16.900769  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:16.900914  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:16.946397  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:16.946434  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:19.575106  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:19.575677  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:19.575744  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:19.575820  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:19.608382  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:19.608405  734517 cri.go:89] found id: ""
	I1101 10:21:19.608414  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:19.608471  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:19.613183  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:19.613264  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:19.644448  734517 cri.go:89] found id: ""
	I1101 10:21:19.644481  734517 logs.go:282] 0 containers: []
	W1101 10:21:19.644490  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:19.644498  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:19.644548  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:19.679274  734517 cri.go:89] found id: ""
	I1101 10:21:19.679311  734517 logs.go:282] 0 containers: []
	W1101 10:21:19.679323  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:19.679331  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:19.679395  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:19.714737  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:19.714765  734517 cri.go:89] found id: ""
	I1101 10:21:19.714775  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:19.714859  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:19.719718  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:19.719779  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:19.754585  734517 cri.go:89] found id: ""
	I1101 10:21:19.754613  734517 logs.go:282] 0 containers: []
	W1101 10:21:19.754622  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:19.754629  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:19.754695  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:19.794338  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:19.794364  734517 cri.go:89] found id: ""
	I1101 10:21:19.794374  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:19.794438  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:19.800064  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:19.800142  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:19.836178  734517 cri.go:89] found id: ""
	I1101 10:21:19.836205  734517 logs.go:282] 0 containers: []
	W1101 10:21:19.836216  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:19.836224  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:19.836277  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	
	
	==> CRI-O <==
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.381933757Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-kp445/POD" id=532d29e6-80b4-42ce-b7a5-59245600e4e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.382048024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.383099232Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.383923403Z" level=info msg="Ran pod sandbox 5fba7305785d39d0b927243f57ab6f9f12aafcc171710bf66ba763dc47c744be with infra container: kube-system/kindnet-487js/POD" id=3e5be296-0fec-4768-bb1c-8eae0a28ed59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.38545714Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1aa6a0ae-f655-49c1-98a9-3a0aff592185 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.385926491Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=532d29e6-80b4-42ce-b7a5-59245600e4e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.387159025Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d5aaeae0-4bde-4c16-849e-2f87bb51b7c5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.387668151Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.389136705Z" level=info msg="Creating container: kube-system/kindnet-487js/kindnet-cni" id=29643994-256c-41c5-b40c-ec1922e20ce7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.389285697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.389449585Z" level=info msg="Ran pod sandbox a406aba54a6494314781bb627ef72fbc4c4888adc4e2549a01aa0f039da53d86 with infra container: kube-system/kube-proxy-kp445/POD" id=532d29e6-80b4-42ce-b7a5-59245600e4e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.39178023Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b8f72940-6128-4cf5-91f2-c026b3300ecf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.393733673Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e10bdfd6-207c-46e5-a8d0-bf39cdaa6afe name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.394262215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.394774555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.395178172Z" level=info msg="Creating container: kube-system/kube-proxy-kp445/kube-proxy" id=88c0e789-eca3-4d18-920a-49656529dd8e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.395315377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.400042574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.400729315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.429088887Z" level=info msg="Created container 5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729: kube-system/kindnet-487js/kindnet-cni" id=29643994-256c-41c5-b40c-ec1922e20ce7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.430058067Z" level=info msg="Starting container: 5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729" id=0f9124bd-d029-4db4-b550-488e47e8fab1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.432468774Z" level=info msg="Started container" PID=1039 containerID=5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729 description=kube-system/kindnet-487js/kindnet-cni id=0f9124bd-d029-4db4-b550-488e47e8fab1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fba7305785d39d0b927243f57ab6f9f12aafcc171710bf66ba763dc47c744be
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.433974321Z" level=info msg="Created container 3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe: kube-system/kube-proxy-kp445/kube-proxy" id=88c0e789-eca3-4d18-920a-49656529dd8e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.434814091Z" level=info msg="Starting container: 3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe" id=b8539420-64d8-4db8-8ea3-00703811d4d0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.438303627Z" level=info msg="Started container" PID=1040 containerID=3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe description=kube-system/kube-proxy-kp445/kube-proxy id=b8539420-64d8-4db8-8ea3-00703811d4d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a406aba54a6494314781bb627ef72fbc4c4888adc4e2549a01aa0f039da53d86
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3b70b9eba589f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   a406aba54a649       kube-proxy-kp445                            kube-system
	5f81dc39338fa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   5fba7305785d3       kindnet-487js                               kube-system
	7c09ddecdeca4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   322d5cedad390       etcd-newest-cni-006653                      kube-system
	922955453c813       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   95709fb3fb185       kube-apiserver-newest-cni-006653            kube-system
	c7f1e1f3c53e6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   986e29b0d9a81       kube-controller-manager-newest-cni-006653   kube-system
	49e471af6c5f0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   b7ad92f5761e3       kube-scheduler-newest-cni-006653            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-006653
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-006653
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=newest-cni-006653
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-006653
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:21:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-006653
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e2a07147-2430-4ed4-a07b-b804bc96d00e
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-006653                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-487js                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-006653             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-006653    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-kp445                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-006653             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s (x8 over 32s)  kubelet          Node newest-cni-006653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 32s)  kubelet          Node newest-cni-006653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x8 over 32s)  kubelet          Node newest-cni-006653 status is now: NodeHasSufficientPID
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s                kubelet          Node newest-cni-006653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s                kubelet          Node newest-cni-006653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s                kubelet          Node newest-cni-006653 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                node-controller  Node newest-cni-006653 event: Registered Node newest-cni-006653 in Controller
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-006653 event: Registered Node newest-cni-006653 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b] <==
	{"level":"warn","ts":"2025-11-01T10:21:15.614787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.623093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.633306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.649187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.660200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.672787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.684017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.692536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.703448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.712301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.720275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.728446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.736275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.744812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.753164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.762926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.771447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.780703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.788644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.797503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.806295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.832521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.840262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.847732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.912694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57438","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:22 up  3:03,  0 user,  load average: 3.96, 3.65, 2.88
	Linux newest-cni-006653 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729] <==
	I1101 10:21:17.694283       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:21:17.694696       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:21:17.694876       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:21:17.694897       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:21:17.694932       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:21:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:21:17.897512       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:21:17.897541       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:21:17.897551       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:21:17.897670       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:21:18.490531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:21:18.490896       1 metrics.go:72] Registering metrics
	I1101 10:21:18.490990       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144] <==
	I1101 10:21:16.520550       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:21:16.521500       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:21:16.521510       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:21:16.521517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:21:16.521524       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:21:16.532694       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:21:16.535678       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:21:16.537652       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:21:16.542702       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:21:16.549261       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:21:16.549295       1 policy_source.go:240] refreshing policies
	I1101 10:21:16.557187       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:21:16.905959       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:21:16.944989       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:21:16.976512       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:21:16.990442       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:21:17.000465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:21:17.052596       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.69.172"}
	I1101 10:21:17.066018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.74.134"}
	I1101 10:21:17.423716       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:21:19.724440       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:21:19.724476       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:21:19.777090       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:21:19.926147       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:21:19.926147       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f] <==
	I1101 10:21:19.414648       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:21:19.417933       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:21:19.420070       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:21:19.421165       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:21:19.421188       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:21:19.421199       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:21:19.421500       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:21:19.421619       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:21:19.421995       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:21:19.423063       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:21:19.425301       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:21:19.426478       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:21:19.426507       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:21:19.426541       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:21:19.426592       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:21:19.426605       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:21:19.426614       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:21:19.428734       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:21:19.428866       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:21:19.438209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:19.438241       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:21:19.438256       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:21:19.445552       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:21:19.446765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:19.447830       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe] <==
	I1101 10:21:17.481911       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:21:17.548447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:21:17.649131       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:21:17.649198       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:21:17.649327       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:21:17.677318       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:21:17.677381       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:21:17.684695       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:21:17.685195       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:21:17.685241       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:17.686749       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:21:17.686867       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:21:17.686938       1 config.go:309] "Starting node config controller"
	I1101 10:21:17.686916       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:21:17.686953       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:21:17.686827       1 config.go:200] "Starting service config controller"
	I1101 10:21:17.686977       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:21:17.686945       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:21:17.787807       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:21:17.787821       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:21:17.787884       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:21:17.787897       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c] <==
	I1101 10:21:15.383195       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:21:16.471614       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:21:16.471668       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:21:16.471682       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:21:16.471692       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:21:16.505337       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:21:16.505463       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:16.510395       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:21:16.510767       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:21:16.512119       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:21:16.513122       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:21:16.612697       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.112109     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-006653\" not found" node="newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.470954     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.602309     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-006653\" already exists" pod="kube-system/etcd-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.602351     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.613171     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-006653\" already exists" pod="kube-system/kube-apiserver-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.613227     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.623113     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-006653\" already exists" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.623159     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.632424     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-006653\" already exists" pod="kube-system/kube-scheduler-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.652138     670 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.652298     670 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.652353     670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.654342     670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.067771     670 apiserver.go:52] "Watching apiserver"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.070399     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: E1101 10:21:17.080738     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-006653\" already exists" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.170121     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262393     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-cni-cfg\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262450     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-xtables-lock\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262651     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-lib-modules\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262713     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b-lib-modules\") pod \"kube-proxy-kp445\" (UID: \"ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b\") " pod="kube-system/kube-proxy-kp445"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262743     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b-xtables-lock\") pod \"kube-proxy-kp445\" (UID: \"ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b\") " pod="kube-system/kube-proxy-kp445"
	Nov 01 10:21:19 newest-cni-006653 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:21:19 newest-cni-006653 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:21:19 newest-cni-006653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006653 -n newest-cni-006653
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006653 -n newest-cni-006653: exit status 2 (370.230937ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-006653 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gn6zx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-564f7 kubernetes-dashboard-855c9754f9-zlwtr
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-564f7 kubernetes-dashboard-855c9754f9-zlwtr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-564f7 kubernetes-dashboard-855c9754f9-zlwtr: exit status 1 (78.685377ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gn6zx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-564f7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zlwtr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-564f7 kubernetes-dashboard-855c9754f9-zlwtr: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-006653
helpers_test.go:243: (dbg) docker inspect newest-cni-006653:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64",
	        "Created": "2025-11-01T10:20:40.630212993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 775547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:21:07.628914075Z",
	            "FinishedAt": "2025-11-01T10:21:06.668479491Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/hostname",
	        "HostsPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/hosts",
	        "LogPath": "/var/lib/docker/containers/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64/91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64-json.log",
	        "Name": "/newest-cni-006653",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-006653:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-006653",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "91a32a4040ae1d9009bdeb1e6d6b91a05f53441fab7a04836d1306a797263d64",
	                "LowerDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c10def8fe79d863bddcf542dfd2838cdfe2bb73d219aa8d27f9ddb8feb62b4da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-006653",
	                "Source": "/var/lib/docker/volumes/newest-cni-006653/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-006653",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-006653",
	                "name.minikube.sigs.k8s.io": "newest-cni-006653",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3037afbf122899f407145aa4bca26f74da21e0b95c3162bde124afc8adb9a15",
	            "SandboxKey": "/var/run/docker/netns/e3037afbf122",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-006653": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:d7:93:77:28:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c02c09c0ce161b2b9f0f4d8dfbab9af05a638642c6978f8142ed5d4368be572",
	                    "EndpointID": "a222b9f83a6e8b9fb089f29b07febc50747ef06c88a9d0da5f06d27858859657",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-006653",
	                        "91a32a4040ae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006653 -n newest-cni-006653
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006653 -n newest-cni-006653: exit status 2 (351.649434ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-006653 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:19 UTC │
	│ start   │ -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:19 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ old-k8s-version-556573 image list --format=json                                                                                                                                                                                               │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ no-preload-680879 image list --format=json                                                                                                                                                                                                    │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p no-preload-680879 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p disable-driver-mounts-083568                                                                                                                                                                                                               │ disable-driver-mounts-083568 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p cert-expiration-577441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p cert-expiration-577441                                                                                                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-006653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ stop    │ -p newest-cni-006653 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-006653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-535119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-535119 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ image   │ newest-cni-006653 image list --format=json                                                                                                                                                                                                    │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ pause   │ -p newest-cni-006653 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:21:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:21:07.368818  775345 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:21:07.368991  775345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:07.369005  775345 out.go:374] Setting ErrFile to fd 2...
	I1101 10:21:07.369011  775345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:07.369282  775345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:21:07.369804  775345 out.go:368] Setting JSON to false
	I1101 10:21:07.372138  775345 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11004,"bootTime":1761981463,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:21:07.372273  775345 start.go:143] virtualization: kvm guest
	I1101 10:21:07.374034  775345 out.go:179] * [newest-cni-006653] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:21:07.375251  775345 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:21:07.375276  775345 notify.go:221] Checking for updates...
	I1101 10:21:07.377188  775345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:21:07.378236  775345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:07.379230  775345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:21:07.380231  775345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:21:07.381285  775345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:21:07.382730  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:07.383283  775345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:21:07.412824  775345 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:21:07.413000  775345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:07.477031  775345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:21:07.464068959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:07.477164  775345 docker.go:319] overlay module found
	I1101 10:21:07.479294  775345 out.go:179] * Using the docker driver based on existing profile
	I1101 10:21:07.480228  775345 start.go:309] selected driver: docker
	I1101 10:21:07.480246  775345 start.go:930] validating driver "docker" against &{Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:07.480361  775345 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:21:07.481141  775345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:07.547108  775345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:21:07.535480294 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:07.547439  775345 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:21:07.547475  775345 cni.go:84] Creating CNI manager for ""
	I1101 10:21:07.547541  775345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:21:07.547651  775345 start.go:353] cluster config:
	{Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:07.549748  775345 out.go:179] * Starting "newest-cni-006653" primary control-plane node in "newest-cni-006653" cluster
	I1101 10:21:07.550569  775345 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:21:07.551613  775345 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:21:07.552531  775345 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:21:07.552589  775345 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:21:07.552604  775345 cache.go:59] Caching tarball of preloaded images
	I1101 10:21:07.552645  775345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:21:07.552722  775345 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:21:07.552741  775345 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:21:07.552950  775345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json ...
	I1101 10:21:07.577411  775345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:21:07.577438  775345 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:21:07.577479  775345 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:21:07.577518  775345 start.go:360] acquireMachinesLock for newest-cni-006653: {Name:mkf496d0b80c7855406646357bd774886a0856a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:21:07.577606  775345 start.go:364] duration metric: took 56.04µs to acquireMachinesLock for "newest-cni-006653"
	I1101 10:21:07.577634  775345 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:21:07.577646  775345 fix.go:54] fixHost starting: 
	I1101 10:21:07.577966  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:07.598527  775345 fix.go:112] recreateIfNeeded on newest-cni-006653: state=Stopped err=<nil>
	W1101 10:21:07.598568  775345 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:21:05.475588  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:21:07.475900  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:06.515156  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:06.516003  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:06.516072  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:06.516133  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:06.558213  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:06.558244  734517 cri.go:89] found id: ""
	I1101 10:21:06.558259  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:06.558332  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.564141  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:06.564236  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:06.600088  734517 cri.go:89] found id: ""
	I1101 10:21:06.600122  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.600134  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:06.600142  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:06.600216  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:06.638676  734517 cri.go:89] found id: ""
	I1101 10:21:06.638722  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.638734  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:06.638744  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:06.638815  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:06.676103  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:06.676133  734517 cri.go:89] found id: ""
	I1101 10:21:06.676144  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:06.676203  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.681722  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:06.681799  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:06.719509  734517 cri.go:89] found id: ""
	I1101 10:21:06.719543  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.719554  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:06.719563  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:06.719637  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:06.752396  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:06.752533  734517 cri.go:89] found id: ""
	I1101 10:21:06.752545  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:06.752603  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:06.757697  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:06.757763  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:06.790052  734517 cri.go:89] found id: ""
	I1101 10:21:06.790091  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.790103  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:06.790113  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:06.790186  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:06.821398  734517 cri.go:89] found id: ""
	I1101 10:21:06.821436  734517 logs.go:282] 0 containers: []
	W1101 10:21:06.821450  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:06.821475  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:06.821495  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:06.853392  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:06.853425  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:06.912616  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:06.912661  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:06.947720  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:06.947759  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:07.058980  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:07.059023  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:07.080200  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:07.080238  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:07.150168  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:07.150197  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:07.150221  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:07.191996  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:07.192035  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
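The repeated cycle above (apiserver healthz probe, then per-component log gathering over CRI) can be replayed by hand on the affected node. A minimal sketch, assuming shell access to the minikube container; curl is an assumption here, since the test probes healthz with its own HTTP client:

    # probe the apiserver endpoint the test checks (connection refused in this run) -- curl assumed available
    curl -k --max-time 2 https://192.168.103.2:8443/healthz
    # list any kube-apiserver containers known to CRI-O, then tail the first one's logs
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$ID" ] && sudo /usr/local/bin/crictl logs --tail 400 "$ID"
    # the same journal sources the test collects
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400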
	I1101 10:21:09.754304  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:09.754761  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:09.754823  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:09.754892  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:09.787037  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:09.787065  734517 cri.go:89] found id: ""
	I1101 10:21:09.787074  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:09.787139  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.791637  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:09.791724  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:09.822734  734517 cri.go:89] found id: ""
	I1101 10:21:09.822762  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.822772  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:09.822778  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:09.822827  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:07.600177  775345 out.go:252] * Restarting existing docker container for "newest-cni-006653" ...
	I1101 10:21:07.600261  775345 cli_runner.go:164] Run: docker start newest-cni-006653
	I1101 10:21:07.887226  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:07.907473  775345 kic.go:430] container "newest-cni-006653" state is running.
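The restart of the stopped profile container and the state check above can be reproduced directly against the Docker daemon with the same commands the test runs:

    docker start newest-cni-006653
    docker container inspect newest-cni-006653 --format='{{.State.Status}}'   # expect "running"
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-006653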
	I1101 10:21:07.908052  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:07.930273  775345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/config.json ...
	I1101 10:21:07.930608  775345 machine.go:94] provisionDockerMachine start ...
	I1101 10:21:07.930697  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:07.950868  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:07.951192  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:07.951214  775345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:21:07.951934  775345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59284->127.0.0.1:33208: read: connection reset by peer
	I1101 10:21:11.101533  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-006653
	
	I1101 10:21:11.101567  775345 ubuntu.go:182] provisioning hostname "newest-cni-006653"
	I1101 10:21:11.101627  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.121992  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.122272  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.122293  775345 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-006653 && echo "newest-cni-006653" | sudo tee /etc/hostname
	I1101 10:21:11.278332  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-006653
	
	I1101 10:21:11.278417  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.298025  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.298366  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.298396  775345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-006653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-006653/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-006653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:21:11.446407  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:21:11.446446  775345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21832-514161/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-514161/.minikube}
	I1101 10:21:11.446476  775345 ubuntu.go:190] setting up certificates
	I1101 10:21:11.446494  775345 provision.go:84] configureAuth start
	I1101 10:21:11.446585  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:11.467021  775345 provision.go:143] copyHostCerts
	I1101 10:21:11.467089  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem, removing ...
	I1101 10:21:11.467107  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem
	I1101 10:21:11.467188  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/ca.pem (1078 bytes)
	I1101 10:21:11.467319  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem, removing ...
	I1101 10:21:11.467328  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem
	I1101 10:21:11.467356  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/cert.pem (1123 bytes)
	I1101 10:21:11.467431  775345 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem, removing ...
	I1101 10:21:11.467438  775345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem
	I1101 10:21:11.467464  775345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-514161/.minikube/key.pem (1675 bytes)
	I1101 10:21:11.467535  775345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem org=jenkins.newest-cni-006653 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-006653]
	I1101 10:21:11.656041  775345 provision.go:177] copyRemoteCerts
	I1101 10:21:11.656114  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:21:11.656155  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.675562  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:11.780483  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:21:11.801492  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:21:11.822639  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:21:11.844599  775345 provision.go:87] duration metric: took 398.086986ms to configureAuth
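The server certificate generated above embeds the SANs listed in the provision.go line (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-006653). If the copied cert needs checking, openssl (assumed available on the host; the test does not run it) can print them:

    openssl x509 -in /home/jenkins/minikube-integration/21832-514161/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'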
	I1101 10:21:11.844629  775345 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:21:11.844827  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:11.844986  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:11.865032  775345 main.go:143] libmachine: Using SSH client type: native
	I1101 10:21:11.865396  775345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I1101 10:21:11.865423  775345 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:21:12.151927  775345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:21:12.151959  775345 machine.go:97] duration metric: took 4.221331346s to provisionDockerMachine
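The runtime option written above lands in /etc/sysconfig/crio.minikube before CRI-O is restarted; isolated from the provisioner, the same step run on the node is:

    sudo mkdir -p /etc/sysconfig
    printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio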
	I1101 10:21:12.151974  775345 start.go:293] postStartSetup for "newest-cni-006653" (driver="docker")
	I1101 10:21:12.151984  775345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:21:12.152046  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:21:12.152087  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.172073  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.276880  775345 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:21:12.281085  775345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:21:12.281117  775345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:21:12.281130  775345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/addons for local assets ...
	I1101 10:21:12.281178  775345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-514161/.minikube/files for local assets ...
	I1101 10:21:12.281267  775345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem -> 5176872.pem in /etc/ssl/certs
	I1101 10:21:12.281363  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:21:12.289865  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:21:12.310993  775345 start.go:296] duration metric: took 159.002326ms for postStartSetup
	I1101 10:21:12.311102  775345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:21:12.311149  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.330337  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	W1101 10:21:09.974921  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:21:12.475062  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:12.430860  775345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:21:12.436672  775345 fix.go:56] duration metric: took 4.859015473s for fixHost
	I1101 10:21:12.436705  775345 start.go:83] releasing machines lock for "newest-cni-006653", held for 4.859082301s
	I1101 10:21:12.436786  775345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-006653
	I1101 10:21:12.456783  775345 ssh_runner.go:195] Run: cat /version.json
	I1101 10:21:12.456896  775345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:21:12.456902  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.457005  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:12.477799  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.478095  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:12.637349  775345 ssh_runner.go:195] Run: systemctl --version
	I1101 10:21:12.645138  775345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:21:12.685879  775345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:21:12.691371  775345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:21:12.691434  775345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:21:12.700901  775345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:21:12.700930  775345 start.go:496] detecting cgroup driver to use...
	I1101 10:21:12.700976  775345 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:21:12.701037  775345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:21:12.717316  775345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:21:12.733635  775345 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:21:12.733689  775345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:21:12.750497  775345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:21:12.767331  775345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:21:12.854808  775345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:21:12.938672  775345 docker.go:234] disabling docker service ...
	I1101 10:21:12.938746  775345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:21:12.957137  775345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:21:12.972571  775345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:21:13.074081  775345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:21:13.169823  775345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:21:13.184846  775345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:21:13.204139  775345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:21:13.204216  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.215765  775345 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:21:13.215867  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.227103  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.238022  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.249272  775345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:21:13.259995  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.271255  775345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.282311  775345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:21:13.294977  775345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:21:13.304502  775345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:21:13.313752  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:13.405995  775345 ssh_runner.go:195] Run: sudo systemctl restart crio
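The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl), enables IP forwarding, then reloads and restarts CRI-O. Condensed from the commands the test runs:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' $CONF
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' $CONF
    sudo sed -i '/conmon_cgroup = .*/d' $CONF
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' $CONF
    sudo grep -q "^ *default_sysctls" $CONF || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' $CONF
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' $CONF
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio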
	I1101 10:21:13.532643  775345 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:21:13.532727  775345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:21:13.537752  775345 start.go:564] Will wait 60s for crictl version
	I1101 10:21:13.537818  775345 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.541787  775345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:21:13.571974  775345 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:21:13.572085  775345 ssh_runner.go:195] Run: crio --version
	I1101 10:21:13.608295  775345 ssh_runner.go:195] Run: crio --version
	I1101 10:21:13.643017  775345 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:21:13.643996  775345 cli_runner.go:164] Run: docker network inspect newest-cni-006653 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:21:13.662889  775345 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:21:13.667996  775345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
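The host-entry refresh above is idempotent: it drops any stale host.minikube.internal line and appends the current gateway IP. The same pattern, taken from the command the test runs:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts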
	I1101 10:21:13.681178  775345 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:21:09.860041  734517 cri.go:89] found id: ""
	I1101 10:21:09.860070  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.860080  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:09.860089  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:09.860142  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:09.890661  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:09.890692  734517 cri.go:89] found id: ""
	I1101 10:21:09.890705  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:09.890778  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.895701  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:09.895778  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:09.927449  734517 cri.go:89] found id: ""
	I1101 10:21:09.927477  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.927488  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:09.927505  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:09.927570  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:09.959698  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:09.959729  734517 cri.go:89] found id: ""
	I1101 10:21:09.959742  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:09.959803  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:09.964405  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:09.964502  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:09.995953  734517 cri.go:89] found id: ""
	I1101 10:21:09.995991  734517 logs.go:282] 0 containers: []
	W1101 10:21:09.996004  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:09.996015  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:09.996073  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:10.030085  734517 cri.go:89] found id: ""
	I1101 10:21:10.030117  734517 logs.go:282] 0 containers: []
	W1101 10:21:10.030126  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:10.030139  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:10.030154  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:10.060407  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:10.060441  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:10.117644  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:10.117690  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:10.152178  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:10.152207  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:10.242540  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:10.242598  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:10.263401  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:10.263441  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:10.324595  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:10.324617  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:10.324633  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:10.362674  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:10.362718  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:12.922943  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:12.923478  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:12.923551  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:12.923612  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:12.957773  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:12.957793  734517 cri.go:89] found id: ""
	I1101 10:21:12.957801  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:12.957878  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:12.962381  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:12.962483  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:12.995296  734517 cri.go:89] found id: ""
	I1101 10:21:12.995333  734517 logs.go:282] 0 containers: []
	W1101 10:21:12.995344  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:12.995352  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:12.995430  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:13.033380  734517 cri.go:89] found id: ""
	I1101 10:21:13.033414  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.033426  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:13.033435  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:13.033506  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:13.064948  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:13.064970  734517 cri.go:89] found id: ""
	I1101 10:21:13.064979  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:13.065041  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.069789  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:13.069887  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:13.100580  734517 cri.go:89] found id: ""
	I1101 10:21:13.100614  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.100626  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:13.100635  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:13.100686  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:13.136326  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:13.136359  734517 cri.go:89] found id: ""
	I1101 10:21:13.136370  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:13.136429  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:13.141519  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:13.141623  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:13.174096  734517 cri.go:89] found id: ""
	I1101 10:21:13.174121  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.174130  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:13.174137  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:13.174185  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:13.207618  734517 cri.go:89] found id: ""
	I1101 10:21:13.207650  734517 logs.go:282] 0 containers: []
	W1101 10:21:13.207662  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:13.207676  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:13.207692  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:13.228225  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:13.228269  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:13.296888  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:13.296924  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:13.296945  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:13.334981  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:13.335028  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:13.397890  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:13.397936  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:13.430702  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:13.430732  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:13.495394  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:13.495444  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:13.533429  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:13.533456  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:13.682134  775345 kubeadm.go:884] updating cluster {Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:21:13.682285  775345 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:21:13.682351  775345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:21:13.719917  775345 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:21:13.719941  775345 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:21:13.719997  775345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:21:13.749397  775345 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:21:13.749421  775345 cache_images.go:86] Images are preloaded, skipping loading
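The preload check above simply asks CRI-O for its image list and concludes everything needed for v1.34.1 is already present. The equivalent query on the node, using the same crictl call (the jq filter is an assumption for readability, not part of the test):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort   # or plain: sudo crictl images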
	I1101 10:21:13.749429  775345 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:21:13.749550  775345 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-006653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
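The kubelet unit rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines later (367 bytes). To confirm what systemd will actually execute, the merged unit can be dumped on the node; a sketch using standard systemd commands, not something the test itself runs:

    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart   # the effective ExecStart line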
	I1101 10:21:13.749653  775345 ssh_runner.go:195] Run: crio config
	I1101 10:21:13.802432  775345 cni.go:84] Creating CNI manager for ""
	I1101 10:21:13.802462  775345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:21:13.802489  775345 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:21:13.802551  775345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-006653 NodeName:newest-cni-006653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:21:13.802705  775345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-006653"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:21:13.802774  775345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:21:13.812295  775345 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:21:13.812378  775345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:21:13.821815  775345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:21:13.837568  775345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:21:13.852297  775345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
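
The kubeadm config rendered above keeps the pod subnet (10.42.0.0/16) disjoint from the service subnet (10.96.0.0/12). A minimal Go sketch of that kind of non-overlap check, illustrative only and not taken from minikube's source:

package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR ranges share any addresses.
// Two CIDR ranges are either nested or disjoint, so they overlap exactly
// when either network contains the other's base address.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	// Values copied from the config above.
	overlap, err := cidrsOverlap("10.42.0.0/16", "10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println("pod/service CIDRs overlap:", overlap) // expected: false
}
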
	I1101 10:21:13.866722  775345 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:21:13.871100  775345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
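
The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: it strips any existing line for that name and appends a fresh one. Roughly the same idea in Go, as a sketch (the sudo handling and in-place /etc/hosts rewrite are left out):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends a fresh
// "ip<TAB>host" mapping, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old mapping, replaced below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostsEntry(in, "192.168.76.2", "control-plane.minikube.internal"))
}
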
	I1101 10:21:13.882942  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:13.967554  775345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:21:13.993768  775345 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653 for IP: 192.168.76.2
	I1101 10:21:13.993792  775345 certs.go:195] generating shared ca certs ...
	I1101 10:21:13.993815  775345 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:13.994012  775345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:21:13.994053  775345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:21:13.994061  775345 certs.go:257] generating profile certs ...
	I1101 10:21:13.994169  775345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/client.key
	I1101 10:21:13.994235  775345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key.c43daf58
	I1101 10:21:13.994270  775345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key
	I1101 10:21:13.994378  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:21:13.994412  775345 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:21:13.994422  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:21:13.994446  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:21:13.994467  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:21:13.994494  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:21:13.994533  775345 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:21:13.995177  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:21:14.017811  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:21:14.041370  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:21:14.063070  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:21:14.090442  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:21:14.111563  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:21:14.132592  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:21:14.152885  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/newest-cni-006653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:21:14.173513  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:21:14.194543  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:21:14.215737  775345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:21:14.237400  775345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:21:14.252487  775345 ssh_runner.go:195] Run: openssl version
	I1101 10:21:14.260121  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:21:14.271081  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.276116  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.276186  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:21:14.313235  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:21:14.323271  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:21:14.334255  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.339072  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.339149  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:21:14.377267  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:21:14.387359  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:21:14.398061  775345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.402635  775345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.402717  775345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:21:14.440665  775345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:21:14.451644  775345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:21:14.456568  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:21:14.497718  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:21:14.545689  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:21:14.597289  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:21:14.650890  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:21:14.703137  775345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
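
Each `openssl x509 ... -checkend 86400` run above asks whether the certificate will still be valid in 24 hours. An equivalent single check in Go, as a sketch (the path is just one of the files probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM-encoded certificate at path is still
// valid for at least the given duration (openssl's -checkend semantics).
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for 24h:", ok)
}
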
	I1101 10:21:14.742240  775345 kubeadm.go:401] StartCluster: {Name:newest-cni-006653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-006653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:21:14.742382  775345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:21:14.742487  775345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:21:14.779439  775345 cri.go:89] found id: "7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b"
	I1101 10:21:14.779467  775345 cri.go:89] found id: "922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144"
	I1101 10:21:14.779473  775345 cri.go:89] found id: "c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f"
	I1101 10:21:14.779477  775345 cri.go:89] found id: "49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c"
	I1101 10:21:14.779495  775345 cri.go:89] found id: ""
	I1101 10:21:14.779547  775345 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:21:14.798690  775345 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:14Z" level=error msg="open /run/runc: no such file or directory"
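
In the lines above, cri.go first enumerates kube-system containers with a label-filtered `crictl ps -a --quiet`, then tries `sudo runc list`, which fails here because /run/runc does not exist on this node. A rough sketch of the crictl half in Go (illustrative only, not the actual helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers shells out to crictl and returns the container IDs
// labelled with the kube-system namespace, one ID per line of --quiet output.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
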
	I1101 10:21:14.798775  775345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:21:14.810055  775345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:21:14.810075  775345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:21:14.810127  775345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:21:14.821271  775345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:21:14.822995  775345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-006653" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:14.823931  775345 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-514161/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-006653" cluster setting kubeconfig missing "newest-cni-006653" context setting]
	I1101 10:21:14.825362  775345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.828027  775345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:21:14.840116  775345 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 10:21:14.840164  775345 kubeadm.go:602] duration metric: took 30.082653ms to restartPrimaryControlPlane
	I1101 10:21:14.840178  775345 kubeadm.go:403] duration metric: took 97.950111ms to StartCluster
	I1101 10:21:14.840202  775345 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.840292  775345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:14.842793  775345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:14.843615  775345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:21:14.843831  775345 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:21:14.843950  775345 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-006653"
	I1101 10:21:14.843973  775345 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-006653"
	W1101 10:21:14.843985  775345 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:21:14.844018  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.844087  775345 addons.go:70] Setting dashboard=true in profile "newest-cni-006653"
	I1101 10:21:14.844108  775345 addons.go:239] Setting addon dashboard=true in "newest-cni-006653"
	W1101 10:21:14.844115  775345 addons.go:248] addon dashboard should already be in state true
	I1101 10:21:14.844139  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.843915  775345 config.go:182] Loaded profile config "newest-cni-006653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:14.844318  775345 addons.go:70] Setting default-storageclass=true in profile "newest-cni-006653"
	I1101 10:21:14.844352  775345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-006653"
	I1101 10:21:14.844561  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.844561  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.844727  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.847357  775345 out.go:179] * Verifying Kubernetes components...
	I1101 10:21:14.849672  775345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:21:14.877374  775345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:21:14.878757  775345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:21:14.878783  775345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:21:14.878934  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:14.879347  775345 addons.go:239] Setting addon default-storageclass=true in "newest-cni-006653"
	W1101 10:21:14.879369  775345 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:21:14.879400  775345 host.go:66] Checking if "newest-cni-006653" exists ...
	I1101 10:21:14.879894  775345 cli_runner.go:164] Run: docker container inspect newest-cni-006653 --format={{.State.Status}}
	I1101 10:21:14.883519  775345 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:21:14.884583  775345 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:21:14.885685  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:21:14.885714  775345 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:21:14.885798  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:14.913363  775345 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:21:14.913437  775345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:21:14.913516  775345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-006653
	I1101 10:21:14.920058  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:14.928765  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:14.950655  775345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/newest-cni-006653/id_rsa Username:docker}
	I1101 10:21:15.031938  775345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:21:15.051569  775345 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:21:15.051678  775345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:21:15.053736  775345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:21:15.062921  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:21:15.062952  775345 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:21:15.068552  775345 api_server.go:72] duration metric: took 224.885945ms to wait for apiserver process to appear ...
	I1101 10:21:15.069778  775345 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:21:15.069830  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:15.081144  775345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:21:15.083359  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:21:15.083385  775345 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:21:15.101356  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:21:15.101388  775345 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:21:15.128679  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:21:15.128710  775345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:21:15.146757  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:21:15.146787  775345 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:21:15.165586  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:21:15.165620  775345 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:21:15.182614  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:21:15.182646  775345 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:21:15.201788  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:21:15.201820  775345 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:21:15.218759  775345 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:21:15.218792  775345 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:21:15.240068  775345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:21:16.444318  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:21:16.444367  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:21:16.444389  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:16.456535  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:21:16.456566  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:21:16.570120  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:16.579924  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:21:16.579969  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:21:17.070474  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:17.078357  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:21:17.078395  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:21:17.158635  775345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.104857927s)
	I1101 10:21:17.158701  775345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.077520733s)
	I1101 10:21:17.158895  775345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.918742923s)
	I1101 10:21:17.160236  775345 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-006653 addons enable metrics-server
	
	I1101 10:21:17.172330  775345 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 10:21:17.173432  775345 addons.go:515] duration metric: took 2.329589022s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:21:17.570761  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:17.577025  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:21:17.577070  775345 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:21:18.070485  775345 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 10:21:18.075052  775345 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 10:21:18.076253  775345 api_server.go:141] control plane version: v1.34.1
	I1101 10:21:18.076286  775345 api_server.go:131] duration metric: took 3.006491031s to wait for apiserver health ...
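
The progression above — 403 from the anonymous probe, then 500 while the rbac/bootstrap-roles post-start hook is still pending, then 200 — is the normal healthz warm-up after an apiserver restart. A minimal polling loop in Go, as a sketch (minikube's own helper lives in api_server.go and differs in detail):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline expires. TLS verification is skipped because the probe
// only cares about reachability, not server identity.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}
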
	I1101 10:21:18.076297  775345 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:21:18.080581  775345 system_pods.go:59] 8 kube-system pods found
	I1101 10:21:18.080630  775345 system_pods.go:61] "coredns-66bc5c9577-gn6zx" [a7bda15a-3bb6-4481-b103-cc8eed070995] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:21:18.080643  775345 system_pods.go:61] "etcd-newest-cni-006653" [e2c0df01-64cf-4a18-821f-527dddcf3772] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:21:18.080652  775345 system_pods.go:61] "kindnet-487js" [0400e397-aa86-4a6e-976e-ff1a3844727b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:21:18.080662  775345 system_pods.go:61] "kube-apiserver-newest-cni-006653" [2bd8a1b8-97ce-4f57-90a9-e523107f3bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:21:18.080671  775345 system_pods.go:61] "kube-controller-manager-newest-cni-006653" [b95204ce-cd11-470d-add1-5c7ca7f0494d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:21:18.080683  775345 system_pods.go:61] "kube-proxy-kp445" [ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:21:18.080691  775345 system_pods.go:61] "kube-scheduler-newest-cni-006653" [431cf3e8-7ee3-4c54-8e86-21f4a7901987] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:21:18.080702  775345 system_pods.go:61] "storage-provisioner" [78945df3-ecd6-4d3d-aadb-3b0eb7fb8967] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:21:18.080713  775345 system_pods.go:74] duration metric: took 4.407136ms to wait for pod list to return data ...
	I1101 10:21:18.080727  775345 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:21:18.083068  775345 default_sa.go:45] found service account: "default"
	I1101 10:21:18.083099  775345 default_sa.go:55] duration metric: took 2.363908ms for default service account to be created ...
	I1101 10:21:18.083113  775345 kubeadm.go:587] duration metric: took 3.239455542s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:21:18.083135  775345 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:21:18.085773  775345 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:21:18.085846  775345 node_conditions.go:123] node cpu capacity is 8
	I1101 10:21:18.085861  775345 node_conditions.go:105] duration metric: took 2.721012ms to run NodePressure ...
	I1101 10:21:18.085876  775345 start.go:242] waiting for startup goroutines ...
	I1101 10:21:18.085883  775345 start.go:247] waiting for cluster config update ...
	I1101 10:21:18.085894  775345 start.go:256] writing updated cluster config ...
	I1101 10:21:18.086182  775345 ssh_runner.go:195] Run: rm -f paused
	I1101 10:21:18.152570  775345 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:21:18.154160  775345 out.go:179] * Done! kubectl is now configured to use "newest-cni-006653" cluster and "default" namespace by default
	W1101 10:21:14.977046  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	W1101 10:21:17.475208  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:16.140491  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:16.141117  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:16.141183  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:16.141241  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:16.180266  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:16.180314  734517 cri.go:89] found id: ""
	I1101 10:21:16.180327  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:16.180496  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:16.185864  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:16.185945  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:16.221184  734517 cri.go:89] found id: ""
	I1101 10:21:16.221219  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.221235  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:16.221243  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:16.221303  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:16.255846  734517 cri.go:89] found id: ""
	I1101 10:21:16.255882  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.255893  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:16.255902  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:16.255970  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:16.299501  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:16.299533  734517 cri.go:89] found id: ""
	I1101 10:21:16.299544  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:16.299612  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:16.307031  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:16.307119  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:16.367122  734517 cri.go:89] found id: ""
	I1101 10:21:16.367238  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.367268  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:16.367288  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:16.367377  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:16.404438  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:16.404466  734517 cri.go:89] found id: ""
	I1101 10:21:16.404513  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:16.404579  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:16.409727  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:16.409792  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:16.456655  734517 cri.go:89] found id: ""
	I1101 10:21:16.456680  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.456691  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:16.456699  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:16.456759  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:16.518616  734517 cri.go:89] found id: ""
	I1101 10:21:16.518650  734517 logs.go:282] 0 containers: []
	W1101 10:21:16.518662  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:16.518676  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:16.518693  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:16.549435  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:16.549507  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:16.640815  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:16.640850  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:16.640868  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:16.694366  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:16.694425  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:16.770807  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:16.770870  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:16.813677  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:16.813711  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:16.900769  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:16.900914  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:16.946397  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:16.946434  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:19.575106  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:19.575677  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:19.575744  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:19.575820  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:19.608382  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:19.608405  734517 cri.go:89] found id: ""
	I1101 10:21:19.608414  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:19.608471  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:19.613183  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:19.613264  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:19.644448  734517 cri.go:89] found id: ""
	I1101 10:21:19.644481  734517 logs.go:282] 0 containers: []
	W1101 10:21:19.644490  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:19.644498  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:19.644548  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:19.679274  734517 cri.go:89] found id: ""
	I1101 10:21:19.679311  734517 logs.go:282] 0 containers: []
	W1101 10:21:19.679323  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:19.679331  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:19.679395  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:19.714737  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:19.714765  734517 cri.go:89] found id: ""
	I1101 10:21:19.714775  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:19.714859  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:19.719718  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:19.719779  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:19.754585  734517 cri.go:89] found id: ""
	I1101 10:21:19.754613  734517 logs.go:282] 0 containers: []
	W1101 10:21:19.754622  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:19.754629  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:19.754695  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:19.794338  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:19.794364  734517 cri.go:89] found id: ""
	I1101 10:21:19.794374  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:19.794438  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:19.800064  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:19.800142  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:19.836178  734517 cri.go:89] found id: ""
	I1101 10:21:19.836205  734517 logs.go:282] 0 containers: []
	W1101 10:21:19.836216  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:19.836224  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:19.836277  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	W1101 10:21:19.475355  760328 node_ready.go:57] node "embed-certs-678014" has "Ready":"False" status (will retry)
	I1101 10:21:21.974953  760328 node_ready.go:49] node "embed-certs-678014" is "Ready"
	I1101 10:21:21.974987  760328 node_ready.go:38] duration metric: took 41.003711997s for node "embed-certs-678014" to be "Ready" ...
	I1101 10:21:21.975003  760328 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:21:21.975060  760328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:21:21.988740  760328 api_server.go:72] duration metric: took 41.545628804s to wait for apiserver process to appear ...
	I1101 10:21:21.988769  760328 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:21:21.988792  760328 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:21:21.993160  760328 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 10:21:21.994200  760328 api_server.go:141] control plane version: v1.34.1
	I1101 10:21:21.994228  760328 api_server.go:131] duration metric: took 5.452772ms to wait for apiserver health ...
	I1101 10:21:21.994237  760328 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:21:21.997970  760328 system_pods.go:59] 8 kube-system pods found
	I1101 10:21:21.998026  760328 system_pods.go:61] "coredns-66bc5c9577-vlf7q" [6b08350a-b7d7-4564-8275-a42d7e42cae1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:21:21.998033  760328 system_pods.go:61] "etcd-embed-certs-678014" [480fb441-2b5d-4ba6-88b5-b0da9874249a] Running
	I1101 10:21:21.998040  760328 system_pods.go:61] "kindnet-fzb8b" [9afe6a1c-b603-4bff-80ea-a8acd9e143ff] Running
	I1101 10:21:21.998044  760328 system_pods.go:61] "kube-apiserver-embed-certs-678014" [22ebdf2d-b7af-400d-922f-33f9c1bd91d6] Running
	I1101 10:21:21.998052  760328 system_pods.go:61] "kube-controller-manager-embed-certs-678014" [0f91548c-eb8c-4bb2-8fba-2e9cbcbe487e] Running
	I1101 10:21:21.998056  760328 system_pods.go:61] "kube-proxy-tlw2d" [e2964bb1-7bfc-40ab-9ee9-8db9e09909ad] Running
	I1101 10:21:21.998062  760328 system_pods.go:61] "kube-scheduler-embed-certs-678014" [4a4b00f3-4e72-4d82-a783-1c866bf61006] Running
	I1101 10:21:21.998067  760328 system_pods.go:61] "storage-provisioner" [d8b98733-a837-48d1-aaee-f8d72b5e81f3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:21:21.998077  760328 system_pods.go:74] duration metric: took 3.833181ms to wait for pod list to return data ...
	I1101 10:21:21.998088  760328 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:21:22.000947  760328 default_sa.go:45] found service account: "default"
	I1101 10:21:22.000981  760328 default_sa.go:55] duration metric: took 2.881454ms for default service account to be created ...
	I1101 10:21:22.000995  760328 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:21:22.004329  760328 system_pods.go:86] 8 kube-system pods found
	I1101 10:21:22.004370  760328 system_pods.go:89] "coredns-66bc5c9577-vlf7q" [6b08350a-b7d7-4564-8275-a42d7e42cae1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:21:22.004378  760328 system_pods.go:89] "etcd-embed-certs-678014" [480fb441-2b5d-4ba6-88b5-b0da9874249a] Running
	I1101 10:21:22.004385  760328 system_pods.go:89] "kindnet-fzb8b" [9afe6a1c-b603-4bff-80ea-a8acd9e143ff] Running
	I1101 10:21:22.004389  760328 system_pods.go:89] "kube-apiserver-embed-certs-678014" [22ebdf2d-b7af-400d-922f-33f9c1bd91d6] Running
	I1101 10:21:22.004395  760328 system_pods.go:89] "kube-controller-manager-embed-certs-678014" [0f91548c-eb8c-4bb2-8fba-2e9cbcbe487e] Running
	I1101 10:21:22.004400  760328 system_pods.go:89] "kube-proxy-tlw2d" [e2964bb1-7bfc-40ab-9ee9-8db9e09909ad] Running
	I1101 10:21:22.004405  760328 system_pods.go:89] "kube-scheduler-embed-certs-678014" [4a4b00f3-4e72-4d82-a783-1c866bf61006] Running
	I1101 10:21:22.004412  760328 system_pods.go:89] "storage-provisioner" [d8b98733-a837-48d1-aaee-f8d72b5e81f3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:21:22.004447  760328 retry.go:31] will retry after 250.760964ms: missing components: kube-dns
	I1101 10:21:22.259487  760328 system_pods.go:86] 8 kube-system pods found
	I1101 10:21:22.259528  760328 system_pods.go:89] "coredns-66bc5c9577-vlf7q" [6b08350a-b7d7-4564-8275-a42d7e42cae1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:21:22.259537  760328 system_pods.go:89] "etcd-embed-certs-678014" [480fb441-2b5d-4ba6-88b5-b0da9874249a] Running
	I1101 10:21:22.259545  760328 system_pods.go:89] "kindnet-fzb8b" [9afe6a1c-b603-4bff-80ea-a8acd9e143ff] Running
	I1101 10:21:22.259550  760328 system_pods.go:89] "kube-apiserver-embed-certs-678014" [22ebdf2d-b7af-400d-922f-33f9c1bd91d6] Running
	I1101 10:21:22.259558  760328 system_pods.go:89] "kube-controller-manager-embed-certs-678014" [0f91548c-eb8c-4bb2-8fba-2e9cbcbe487e] Running
	I1101 10:21:22.259563  760328 system_pods.go:89] "kube-proxy-tlw2d" [e2964bb1-7bfc-40ab-9ee9-8db9e09909ad] Running
	I1101 10:21:22.259569  760328 system_pods.go:89] "kube-scheduler-embed-certs-678014" [4a4b00f3-4e72-4d82-a783-1c866bf61006] Running
	I1101 10:21:22.259577  760328 system_pods.go:89] "storage-provisioner" [d8b98733-a837-48d1-aaee-f8d72b5e81f3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:21:22.259600  760328 retry.go:31] will retry after 330.439627ms: missing components: kube-dns
	I1101 10:21:22.594734  760328 system_pods.go:86] 8 kube-system pods found
	I1101 10:21:22.594776  760328 system_pods.go:89] "coredns-66bc5c9577-vlf7q" [6b08350a-b7d7-4564-8275-a42d7e42cae1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:21:22.594784  760328 system_pods.go:89] "etcd-embed-certs-678014" [480fb441-2b5d-4ba6-88b5-b0da9874249a] Running
	I1101 10:21:22.594793  760328 system_pods.go:89] "kindnet-fzb8b" [9afe6a1c-b603-4bff-80ea-a8acd9e143ff] Running
	I1101 10:21:22.594798  760328 system_pods.go:89] "kube-apiserver-embed-certs-678014" [22ebdf2d-b7af-400d-922f-33f9c1bd91d6] Running
	I1101 10:21:22.594803  760328 system_pods.go:89] "kube-controller-manager-embed-certs-678014" [0f91548c-eb8c-4bb2-8fba-2e9cbcbe487e] Running
	I1101 10:21:22.594809  760328 system_pods.go:89] "kube-proxy-tlw2d" [e2964bb1-7bfc-40ab-9ee9-8db9e09909ad] Running
	I1101 10:21:22.594813  760328 system_pods.go:89] "kube-scheduler-embed-certs-678014" [4a4b00f3-4e72-4d82-a783-1c866bf61006] Running
	I1101 10:21:22.594820  760328 system_pods.go:89] "storage-provisioner" [d8b98733-a837-48d1-aaee-f8d72b5e81f3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:21:22.594873  760328 retry.go:31] will retry after 418.275628ms: missing components: kube-dns
	I1101 10:21:23.019474  760328 system_pods.go:86] 8 kube-system pods found
	I1101 10:21:23.019513  760328 system_pods.go:89] "coredns-66bc5c9577-vlf7q" [6b08350a-b7d7-4564-8275-a42d7e42cae1] Running
	I1101 10:21:23.019524  760328 system_pods.go:89] "etcd-embed-certs-678014" [480fb441-2b5d-4ba6-88b5-b0da9874249a] Running
	I1101 10:21:23.019531  760328 system_pods.go:89] "kindnet-fzb8b" [9afe6a1c-b603-4bff-80ea-a8acd9e143ff] Running
	I1101 10:21:23.019538  760328 system_pods.go:89] "kube-apiserver-embed-certs-678014" [22ebdf2d-b7af-400d-922f-33f9c1bd91d6] Running
	I1101 10:21:23.019545  760328 system_pods.go:89] "kube-controller-manager-embed-certs-678014" [0f91548c-eb8c-4bb2-8fba-2e9cbcbe487e] Running
	I1101 10:21:23.019550  760328 system_pods.go:89] "kube-proxy-tlw2d" [e2964bb1-7bfc-40ab-9ee9-8db9e09909ad] Running
	I1101 10:21:23.019555  760328 system_pods.go:89] "kube-scheduler-embed-certs-678014" [4a4b00f3-4e72-4d82-a783-1c866bf61006] Running
	I1101 10:21:23.019560  760328 system_pods.go:89] "storage-provisioner" [d8b98733-a837-48d1-aaee-f8d72b5e81f3] Running
	I1101 10:21:23.019570  760328 system_pods.go:126] duration metric: took 1.018568042s to wait for k8s-apps to be running ...
	I1101 10:21:23.019585  760328 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:21:23.019638  760328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:21:23.038124  760328 system_svc.go:56] duration metric: took 18.524247ms WaitForService to wait for kubelet
	I1101 10:21:23.038167  760328 kubeadm.go:587] duration metric: took 42.595073538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:21:23.038195  760328 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:21:23.043190  760328 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:21:23.043225  760328 node_conditions.go:123] node cpu capacity is 8
	I1101 10:21:23.043251  760328 node_conditions.go:105] duration metric: took 5.048649ms to run NodePressure ...
	I1101 10:21:23.043271  760328 start.go:242] waiting for startup goroutines ...
	I1101 10:21:23.043285  760328 start.go:247] waiting for cluster config update ...
	I1101 10:21:23.043301  760328 start.go:256] writing updated cluster config ...
	I1101 10:21:23.043664  760328 ssh_runner.go:195] Run: rm -f paused
	I1101 10:21:23.049353  760328 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:21:23.055726  760328 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vlf7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:21:23.063591  760328 pod_ready.go:94] pod "coredns-66bc5c9577-vlf7q" is "Ready"
	I1101 10:21:23.063630  760328 pod_ready.go:86] duration metric: took 7.869196ms for pod "coredns-66bc5c9577-vlf7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:21:23.067262  760328 pod_ready.go:83] waiting for pod "etcd-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:21:23.074163  760328 pod_ready.go:94] pod "etcd-embed-certs-678014" is "Ready"
	I1101 10:21:23.074284  760328 pod_ready.go:86] duration metric: took 6.901804ms for pod "etcd-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:21:23.077643  760328 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:21:23.082820  760328 pod_ready.go:94] pod "kube-apiserver-embed-certs-678014" is "Ready"
	I1101 10:21:23.082874  760328 pod_ready.go:86] duration metric: took 5.200369ms for pod "kube-apiserver-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:21:23.085302  760328 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.381933757Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-kp445/POD" id=532d29e6-80b4-42ce-b7a5-59245600e4e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.382048024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.383099232Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.383923403Z" level=info msg="Ran pod sandbox 5fba7305785d39d0b927243f57ab6f9f12aafcc171710bf66ba763dc47c744be with infra container: kube-system/kindnet-487js/POD" id=3e5be296-0fec-4768-bb1c-8eae0a28ed59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.38545714Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1aa6a0ae-f655-49c1-98a9-3a0aff592185 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.385926491Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=532d29e6-80b4-42ce-b7a5-59245600e4e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.387159025Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d5aaeae0-4bde-4c16-849e-2f87bb51b7c5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.387668151Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.389136705Z" level=info msg="Creating container: kube-system/kindnet-487js/kindnet-cni" id=29643994-256c-41c5-b40c-ec1922e20ce7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.389285697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.389449585Z" level=info msg="Ran pod sandbox a406aba54a6494314781bb627ef72fbc4c4888adc4e2549a01aa0f039da53d86 with infra container: kube-system/kube-proxy-kp445/POD" id=532d29e6-80b4-42ce-b7a5-59245600e4e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.39178023Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b8f72940-6128-4cf5-91f2-c026b3300ecf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.393733673Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e10bdfd6-207c-46e5-a8d0-bf39cdaa6afe name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.394262215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.394774555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.395178172Z" level=info msg="Creating container: kube-system/kube-proxy-kp445/kube-proxy" id=88c0e789-eca3-4d18-920a-49656529dd8e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.395315377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.400042574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.400729315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.429088887Z" level=info msg="Created container 5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729: kube-system/kindnet-487js/kindnet-cni" id=29643994-256c-41c5-b40c-ec1922e20ce7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.430058067Z" level=info msg="Starting container: 5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729" id=0f9124bd-d029-4db4-b550-488e47e8fab1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.432468774Z" level=info msg="Started container" PID=1039 containerID=5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729 description=kube-system/kindnet-487js/kindnet-cni id=0f9124bd-d029-4db4-b550-488e47e8fab1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fba7305785d39d0b927243f57ab6f9f12aafcc171710bf66ba763dc47c744be
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.433974321Z" level=info msg="Created container 3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe: kube-system/kube-proxy-kp445/kube-proxy" id=88c0e789-eca3-4d18-920a-49656529dd8e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.434814091Z" level=info msg="Starting container: 3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe" id=b8539420-64d8-4db8-8ea3-00703811d4d0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:17 newest-cni-006653 crio[520]: time="2025-11-01T10:21:17.438303627Z" level=info msg="Started container" PID=1040 containerID=3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe description=kube-system/kube-proxy-kp445/kube-proxy id=b8539420-64d8-4db8-8ea3-00703811d4d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a406aba54a6494314781bb627ef72fbc4c4888adc4e2549a01aa0f039da53d86
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3b70b9eba589f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   a406aba54a649       kube-proxy-kp445                            kube-system
	5f81dc39338fa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   5fba7305785d3       kindnet-487js                               kube-system
	7c09ddecdeca4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   322d5cedad390       etcd-newest-cni-006653                      kube-system
	922955453c813       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   95709fb3fb185       kube-apiserver-newest-cni-006653            kube-system
	c7f1e1f3c53e6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   986e29b0d9a81       kube-controller-manager-newest-cni-006653   kube-system
	49e471af6c5f0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   b7ad92f5761e3       kube-scheduler-newest-cni-006653            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-006653
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-006653
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=newest-cni-006653
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-006653
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:21:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:21:16 +0000   Sat, 01 Nov 2025 10:20:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-006653
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e2a07147-2430-4ed4-a07b-b804bc96d00e
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-006653                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-487js                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-006653             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-006653    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-kp445                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-006653             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node newest-cni-006653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node newest-cni-006653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node newest-cni-006653 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node newest-cni-006653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node newest-cni-006653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node newest-cni-006653 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node newest-cni-006653 event: Registered Node newest-cni-006653 in Controller
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-006653 event: Registered Node newest-cni-006653 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [7c09ddecdeca46ff3ec1552a8c119fc453d012084c77937d37039c7713b8515b] <==
	{"level":"warn","ts":"2025-11-01T10:21:15.614787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.623093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.633306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.649187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.660200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.672787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.684017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.692536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.703448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.712301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.720275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.728446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.736275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.744812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.753164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.762926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.771447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.780703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.788644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.797503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.806295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.832521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.840262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.847732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:15.912694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57438","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:24 up  3:03,  0 user,  load average: 3.96, 3.65, 2.88
	Linux newest-cni-006653 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5f81dc39338faa288b5e42addd10e7486b7d4b85f61aa8fe4077cf9561e1a729] <==
	I1101 10:21:17.694283       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:21:17.694696       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:21:17.694876       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:21:17.694897       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:21:17.694932       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:21:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:21:17.897512       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:21:17.897541       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:21:17.897551       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:21:17.897670       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:21:18.490531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:21:18.490896       1 metrics.go:72] Registering metrics
	I1101 10:21:18.490990       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [922955453c81342bf231488bc1c4788ba0de975b4453762ada023b741185a144] <==
	I1101 10:21:16.520550       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:21:16.521500       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:21:16.521510       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:21:16.521517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:21:16.521524       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:21:16.532694       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:21:16.535678       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:21:16.537652       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:21:16.542702       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:21:16.549261       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:21:16.549295       1 policy_source.go:240] refreshing policies
	I1101 10:21:16.557187       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:21:16.905959       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:21:16.944989       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:21:16.976512       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:21:16.990442       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:21:17.000465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:21:17.052596       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.69.172"}
	I1101 10:21:17.066018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.74.134"}
	I1101 10:21:17.423716       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:21:19.724440       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:21:19.724476       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:21:19.777090       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:21:19.926147       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:21:19.926147       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c7f1e1f3c53e69773b4e36a83142cc7f8552cca4f888399d85ba1875b5ebf29f] <==
	I1101 10:21:19.414648       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:21:19.417933       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:21:19.420070       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:21:19.421165       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:21:19.421188       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:21:19.421199       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:21:19.421500       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:21:19.421619       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:21:19.421995       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:21:19.423063       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:21:19.425301       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:21:19.426478       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:21:19.426507       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:21:19.426541       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:21:19.426592       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:21:19.426605       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:21:19.426614       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:21:19.428734       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:21:19.428866       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:21:19.438209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:19.438241       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:21:19.438256       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:21:19.445552       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:21:19.446765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:19.447830       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [3b70b9eba589fbc2df8137342ab90c0de139b42dcd0cdba712add248e0a957fe] <==
	I1101 10:21:17.481911       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:21:17.548447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:21:17.649131       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:21:17.649198       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:21:17.649327       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:21:17.677318       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:21:17.677381       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:21:17.684695       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:21:17.685195       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:21:17.685241       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:17.686749       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:21:17.686867       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:21:17.686938       1 config.go:309] "Starting node config controller"
	I1101 10:21:17.686916       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:21:17.686953       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:21:17.686827       1 config.go:200] "Starting service config controller"
	I1101 10:21:17.686977       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:21:17.686945       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:21:17.787807       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:21:17.787821       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:21:17.787884       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:21:17.787897       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [49e471af6c5f092029c6717bae1e37da0b4381d85dfad7b5da552c19d207269c] <==
	I1101 10:21:15.383195       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:21:16.471614       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:21:16.471668       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:21:16.471682       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:21:16.471692       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:21:16.505337       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:21:16.505463       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:16.510395       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:21:16.510767       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:21:16.512119       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:21:16.513122       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:21:16.612697       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.112109     670 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-006653\" not found" node="newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.470954     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.602309     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-006653\" already exists" pod="kube-system/etcd-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.602351     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.613171     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-006653\" already exists" pod="kube-system/kube-apiserver-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.613227     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.623113     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-006653\" already exists" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.623159     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: E1101 10:21:16.632424     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-006653\" already exists" pod="kube-system/kube-scheduler-newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.652138     670 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.652298     670 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-006653"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.652353     670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:21:16 newest-cni-006653 kubelet[670]: I1101 10:21:16.654342     670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.067771     670 apiserver.go:52] "Watching apiserver"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.070399     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: E1101 10:21:17.080738     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-006653\" already exists" pod="kube-system/kube-controller-manager-newest-cni-006653"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.170121     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262393     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-cni-cfg\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262450     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-xtables-lock\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262651     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0400e397-aa86-4a6e-976e-ff1a3844727b-lib-modules\") pod \"kindnet-487js\" (UID: \"0400e397-aa86-4a6e-976e-ff1a3844727b\") " pod="kube-system/kindnet-487js"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262713     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b-lib-modules\") pod \"kube-proxy-kp445\" (UID: \"ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b\") " pod="kube-system/kube-proxy-kp445"
	Nov 01 10:21:17 newest-cni-006653 kubelet[670]: I1101 10:21:17.262743     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b-xtables-lock\") pod \"kube-proxy-kp445\" (UID: \"ff20790d-d7e7-4e6c-a9e9-d9aae7e30e7b\") " pod="kube-system/kube-proxy-kp445"
	Nov 01 10:21:19 newest-cni-006653 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:21:19 newest-cni-006653 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:21:19 newest-cni-006653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006653 -n newest-cni-006653
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006653 -n newest-cni-006653: exit status 2 (375.009337ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-006653 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gn6zx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-564f7 kubernetes-dashboard-855c9754f9-zlwtr
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-564f7 kubernetes-dashboard-855c9754f9-zlwtr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-564f7 kubernetes-dashboard-855c9754f9-zlwtr: exit status 1 (76.540502ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gn6zx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-564f7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zlwtr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-006653 describe pod coredns-66bc5c9577-gn6zx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-564f7 kubernetes-dashboard-855c9754f9-zlwtr: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-678014 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-678014 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (283.889416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:21:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-678014 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-678014 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-678014 describe deploy/metrics-server -n kube-system: exit status 1 (64.973664ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-678014 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-678014
helpers_test.go:243: (dbg) docker inspect embed-certs-678014:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8",
	        "Created": "2025-11-01T10:20:19.10525333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 762343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:20:19.141484618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/hosts",
	        "LogPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8-json.log",
	        "Name": "/embed-certs-678014",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-678014:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-678014",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8",
	                "LowerDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-678014",
	                "Source": "/var/lib/docker/volumes/embed-certs-678014/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-678014",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-678014",
	                "name.minikube.sigs.k8s.io": "embed-certs-678014",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ece34e9eea3af8e5f9de5d74660eef5ba382729514275a49b589d7feea99ef65",
	            "SandboxKey": "/var/run/docker/netns/ece34e9eea3a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-678014": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:c7:e3:b0:8b:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "59c3492c15198878d11d0583248059a9226a90667cc7e5ff7108cce34fc74e86",
	                    "EndpointID": "0c6acc140fd71d45cfb9ebd163bc19a5186ec9e6f1650cdd439b551c25c4b80a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-678014",
	                        "7254f01179da"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
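The inspect dump above is mostly of interest for the NetworkSettings.Ports block, which records the host ports Docker published for the kic container (SSH 22/tcp on 33193, the API server 8443/tcp on 33196, and so on). As a sketch, assuming the container is still named embed-certs-678014, a single mapping can be pulled out with docker's --format templating instead of reading the full JSON:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-678014

For the state captured here that would print 33196.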
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678014 -n embed-certs-678014
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-678014 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-678014 logs -n 25: (1.125478804s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-556573 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ image   │ no-preload-680879 image list --format=json                                                                                                                                                                                                    │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ pause   │ -p no-preload-680879 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │                     │
	│ delete  │ -p old-k8s-version-556573                                                                                                                                                                                                                     │ old-k8s-version-556573       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p no-preload-680879                                                                                                                                                                                                                          │ no-preload-680879            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p disable-driver-mounts-083568                                                                                                                                                                                                               │ disable-driver-mounts-083568 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p cert-expiration-577441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ delete  │ -p cert-expiration-577441                                                                                                                                                                                                                     │ cert-expiration-577441       │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:20 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:20 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-006653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ stop    │ -p newest-cni-006653 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-006653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-535119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-535119 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ image   │ newest-cni-006653 image list --format=json                                                                                                                                                                                                    │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ pause   │ -p newest-cni-006653 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ delete  │ -p newest-cni-006653                                                                                                                                                                                                                          │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ delete  │ -p newest-cni-006653                                                                                                                                                                                                                          │ newest-cni-006653            │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p auto-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-678014 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:21:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:21:28.050406  781607 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:21:28.050715  781607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:28.050727  781607 out.go:374] Setting ErrFile to fd 2...
	I1101 10:21:28.050734  781607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:28.050992  781607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:21:28.051520  781607 out.go:368] Setting JSON to false
	I1101 10:21:28.052736  781607 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11025,"bootTime":1761981463,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:21:28.052803  781607 start.go:143] virtualization: kvm guest
	I1101 10:21:28.054524  781607 out.go:179] * [auto-456743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:21:28.055576  781607 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:21:28.055590  781607 notify.go:221] Checking for updates...
	I1101 10:21:28.057494  781607 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:21:28.058526  781607 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:21:28.059451  781607 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:21:28.060373  781607 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:21:28.061301  781607 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:21:28.062731  781607 config.go:182] Loaded profile config "default-k8s-diff-port-535119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:28.062854  781607 config.go:182] Loaded profile config "embed-certs-678014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:28.062948  781607 config.go:182] Loaded profile config "kubernetes-upgrade-949166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:28.063056  781607 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:21:28.089031  781607 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:21:28.089130  781607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:28.149380  781607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 10:21:28.1388595 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:28.149564  781607 docker.go:319] overlay module found
	I1101 10:21:28.151222  781607 out.go:179] * Using the docker driver based on user configuration
	I1101 10:21:28.152279  781607 start.go:309] selected driver: docker
	I1101 10:21:28.152299  781607 start.go:930] validating driver "docker" against <nil>
	I1101 10:21:28.152318  781607 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:21:28.153280  781607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:21:28.211326  781607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 10:21:28.201664657 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:21:28.211613  781607 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:21:28.211917  781607 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:21:28.213403  781607 out.go:179] * Using Docker driver with root privileges
	I1101 10:21:28.214340  781607 cni.go:84] Creating CNI manager for ""
	I1101 10:21:28.214431  781607 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:21:28.214446  781607 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:21:28.214563  781607 start.go:353] cluster config:
	{Name:auto-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1101 10:21:28.215660  781607 out.go:179] * Starting "auto-456743" primary control-plane node in "auto-456743" cluster
	I1101 10:21:28.216523  781607 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:21:28.217506  781607 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:21:28.218351  781607 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:21:28.218400  781607 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:21:28.218410  781607 cache.go:59] Caching tarball of preloaded images
	I1101 10:21:28.218447  781607 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:21:28.218499  781607 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:21:28.218510  781607 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:21:28.218612  781607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/auto-456743/config.json ...
	I1101 10:21:28.218632  781607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/auto-456743/config.json: {Name:mk0673f94e831188975632b4fadf8803fd81dfa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:21:28.240134  781607 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:21:28.240155  781607 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:21:28.240172  781607 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:21:28.240214  781607 start.go:360] acquireMachinesLock for auto-456743: {Name:mkda8640a4826e4daec7ca1450524eebd9817571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:21:28.240342  781607 start.go:364] duration metric: took 106.458µs to acquireMachinesLock for "auto-456743"
	I1101 10:21:28.240375  781607 start.go:93] Provisioning new machine with config: &{Name:auto-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-456743 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:21:28.240456  781607 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:21:26.015715  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:26.016209  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:26.016283  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:26.016350  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:26.046268  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:26.046301  734517 cri.go:89] found id: ""
	I1101 10:21:26.046314  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:26.046369  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:26.050622  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:26.050697  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:26.079715  734517 cri.go:89] found id: ""
	I1101 10:21:26.079740  734517 logs.go:282] 0 containers: []
	W1101 10:21:26.079749  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:26.079756  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:26.079804  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:26.108077  734517 cri.go:89] found id: ""
	I1101 10:21:26.108113  734517 logs.go:282] 0 containers: []
	W1101 10:21:26.108122  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:26.108129  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:26.108184  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:26.137539  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:26.137563  734517 cri.go:89] found id: ""
	I1101 10:21:26.137574  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:26.137623  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:26.141803  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:26.141904  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:26.171563  734517 cri.go:89] found id: ""
	I1101 10:21:26.171589  734517 logs.go:282] 0 containers: []
	W1101 10:21:26.171598  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:26.171604  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:26.171666  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:26.199592  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:26.199621  734517 cri.go:89] found id: ""
	I1101 10:21:26.199631  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:26.199680  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:26.204082  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:26.204159  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:26.232341  734517 cri.go:89] found id: ""
	I1101 10:21:26.232367  734517 logs.go:282] 0 containers: []
	W1101 10:21:26.232378  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:26.232387  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:26.232451  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:26.262489  734517 cri.go:89] found id: ""
	I1101 10:21:26.262518  734517 logs.go:282] 0 containers: []
	W1101 10:21:26.262527  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:26.262537  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:26.262549  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:26.294879  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:26.294912  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:26.388055  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:26.388096  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:26.408971  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:26.409012  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:26.469730  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:26.469756  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:26.469776  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:26.503286  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:26.503324  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:26.557695  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:26.557739  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:26.588337  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:26.588367  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:29.146353  734517 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:21:29.146928  734517 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1101 10:21:29.147012  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:21:29.147081  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:21:29.180984  734517 cri.go:89] found id: "e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:29.181006  734517 cri.go:89] found id: ""
	I1101 10:21:29.181014  734517 logs.go:282] 1 containers: [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf]
	I1101 10:21:29.181067  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:29.185638  734517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:21:29.185724  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:21:29.215620  734517 cri.go:89] found id: ""
	I1101 10:21:29.215645  734517 logs.go:282] 0 containers: []
	W1101 10:21:29.215653  734517 logs.go:284] No container was found matching "etcd"
	I1101 10:21:29.215659  734517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:21:29.215716  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:21:29.246761  734517 cri.go:89] found id: ""
	I1101 10:21:29.246793  734517 logs.go:282] 0 containers: []
	W1101 10:21:29.246804  734517 logs.go:284] No container was found matching "coredns"
	I1101 10:21:29.246813  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:21:29.246909  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:21:29.277976  734517 cri.go:89] found id: "4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:29.278010  734517 cri.go:89] found id: ""
	I1101 10:21:29.278024  734517 logs.go:282] 1 containers: [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385]
	I1101 10:21:29.278088  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:29.282874  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:21:29.282950  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:21:29.315933  734517 cri.go:89] found id: ""
	I1101 10:21:29.315964  734517 logs.go:282] 0 containers: []
	W1101 10:21:29.315975  734517 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:21:29.315984  734517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:21:29.316051  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:21:29.348080  734517 cri.go:89] found id: "495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:29.348103  734517 cri.go:89] found id: ""
	I1101 10:21:29.348112  734517 logs.go:282] 1 containers: [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce]
	I1101 10:21:29.348163  734517 ssh_runner.go:195] Run: which crictl
	I1101 10:21:29.352758  734517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:21:29.352831  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:21:29.386046  734517 cri.go:89] found id: ""
	I1101 10:21:29.386077  734517 logs.go:282] 0 containers: []
	W1101 10:21:29.386086  734517 logs.go:284] No container was found matching "kindnet"
	I1101 10:21:29.386096  734517 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:21:29.386162  734517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:21:29.417518  734517 cri.go:89] found id: ""
	I1101 10:21:29.417557  734517 logs.go:282] 0 containers: []
	W1101 10:21:29.417571  734517 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:21:29.417586  734517 logs.go:123] Gathering logs for kube-scheduler [4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385] ...
	I1101 10:21:29.417601  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4d19beaa95738ecaf94b74f253e2cd1affd0e478f00885368db05f81c6ab2385"
	I1101 10:21:29.484003  734517 logs.go:123] Gathering logs for kube-controller-manager [495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce] ...
	I1101 10:21:29.484056  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 495251f1686bccb76c1fb23a726969a1e27345d5a770f2754743b480c67df3ce"
	I1101 10:21:29.515244  734517 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:21:29.515275  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:21:29.591349  734517 logs.go:123] Gathering logs for container status ...
	I1101 10:21:29.591402  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:21:29.627511  734517 logs.go:123] Gathering logs for kubelet ...
	I1101 10:21:29.627558  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:21:29.734679  734517 logs.go:123] Gathering logs for dmesg ...
	I1101 10:21:29.734725  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:21:29.755641  734517 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:21:29.755679  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:21:29.817348  734517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:21:29.817375  734517 logs.go:123] Gathering logs for kube-apiserver [e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf] ...
	I1101 10:21:29.817391  734517 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e44c233cda76f5e1f3abc8a29d8004d0cd77f212d25bb41901624604d5126caf"
	I1101 10:21:28.242620  781607 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:21:28.242832  781607 start.go:159] libmachine.API.Create for "auto-456743" (driver="docker")
	I1101 10:21:28.242887  781607 client.go:173] LocalClient.Create starting
	I1101 10:21:28.242953  781607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem
	I1101 10:21:28.242989  781607 main.go:143] libmachine: Decoding PEM data...
	I1101 10:21:28.243005  781607 main.go:143] libmachine: Parsing certificate...
	I1101 10:21:28.243081  781607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem
	I1101 10:21:28.243104  781607 main.go:143] libmachine: Decoding PEM data...
	I1101 10:21:28.243115  781607 main.go:143] libmachine: Parsing certificate...
	I1101 10:21:28.243441  781607 cli_runner.go:164] Run: docker network inspect auto-456743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:21:28.260488  781607 cli_runner.go:211] docker network inspect auto-456743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:21:28.260570  781607 network_create.go:284] running [docker network inspect auto-456743] to gather additional debugging logs...
	I1101 10:21:28.260621  781607 cli_runner.go:164] Run: docker network inspect auto-456743
	W1101 10:21:28.277706  781607 cli_runner.go:211] docker network inspect auto-456743 returned with exit code 1
	I1101 10:21:28.277738  781607 network_create.go:287] error running [docker network inspect auto-456743]: docker network inspect auto-456743: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-456743 not found
	I1101 10:21:28.277751  781607 network_create.go:289] output of [docker network inspect auto-456743]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-456743 not found
	
	** /stderr **
	I1101 10:21:28.277897  781607 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:21:28.296122  781607 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db3052bfa0e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:6a:af:78:80:46} reservation:<nil>}
	I1101 10:21:28.296993  781607 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-99d2741e1e59 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:99:ce:91:38:1c} reservation:<nil>}
	I1101 10:21:28.297807  781607 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a696a61d1319 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:f0:66:2c:aa:f2} reservation:<nil>}
	I1101 10:21:28.299335  781607 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ef9060}
	I1101 10:21:28.299399  781607 network_create.go:124] attempt to create docker network auto-456743 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 10:21:28.299508  781607 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-456743 auto-456743
	I1101 10:21:28.360603  781607 network_create.go:108] docker network auto-456743 192.168.76.0/24 created
	I1101 10:21:28.360643  781607 kic.go:121] calculated static IP "192.168.76.2" for the "auto-456743" container
	I1101 10:21:28.360727  781607 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:21:28.378564  781607 cli_runner.go:164] Run: docker volume create auto-456743 --label name.minikube.sigs.k8s.io=auto-456743 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:21:28.396572  781607 oci.go:103] Successfully created a docker volume auto-456743
	I1101 10:21:28.396671  781607 cli_runner.go:164] Run: docker run --rm --name auto-456743-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-456743 --entrypoint /usr/bin/test -v auto-456743:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:21:28.794483  781607 oci.go:107] Successfully prepared a docker volume auto-456743
	I1101 10:21:28.794532  781607 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:21:28.794561  781607 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:21:28.794644  781607 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-456743:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 01 10:21:21 embed-certs-678014 crio[780]: time="2025-11-01T10:21:21.905997606Z" level=info msg="Starting container: 1ddab98393a90d5b3858fb92f4d090885bbc26fa356a85d243c2ecd517289806" id=f967b503-a1e1-4f02-90c6-5faacd4b1b5f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:21 embed-certs-678014 crio[780]: time="2025-11-01T10:21:21.908140555Z" level=info msg="Started container" PID=1849 containerID=1ddab98393a90d5b3858fb92f4d090885bbc26fa356a85d243c2ecd517289806 description=kube-system/coredns-66bc5c9577-vlf7q/coredns id=f967b503-a1e1-4f02-90c6-5faacd4b1b5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d302a3f8f66e9386e358533f2724501036b76b9d5c7cb3fb79b737ff72cc9c2
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.193698371Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e170f4c6-5330-42ad-a27a-c13053383fc5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.193801904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.199198083Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:972f43dd1fa9b1b9173b25a6583f047d892579e2dd401ca1b0a15b0b853cd834 UID:fcbbe122-495c-462f-913f-f3f2b1b23890 NetNS:/var/run/netns/4a7ee963-1e7e-4c4c-970c-7a40134f27f2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00015e7b0}] Aliases:map[]}"
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.199234707Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.211619538Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:972f43dd1fa9b1b9173b25a6583f047d892579e2dd401ca1b0a15b0b853cd834 UID:fcbbe122-495c-462f-913f-f3f2b1b23890 NetNS:/var/run/netns/4a7ee963-1e7e-4c4c-970c-7a40134f27f2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00015e7b0}] Aliases:map[]}"
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.211806473Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.213397698Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.214699654Z" level=info msg="Ran pod sandbox 972f43dd1fa9b1b9173b25a6583f047d892579e2dd401ca1b0a15b0b853cd834 with infra container: default/busybox/POD" id=e170f4c6-5330-42ad-a27a-c13053383fc5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.216348184Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=badb6550-3dd6-432d-9441-da9e02d30bd2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.216523769Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=badb6550-3dd6-432d-9441-da9e02d30bd2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.216573036Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=badb6550-3dd6-432d-9441-da9e02d30bd2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.217542208Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74ea344b-dbc7-49b0-89f5-044b428a6634 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:21:25 embed-certs-678014 crio[780]: time="2025-11-01T10:21:25.223156564Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.414541209Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=74ea344b-dbc7-49b0-89f5-044b428a6634 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.415453695Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=97b39190-b38f-484f-af3b-3dd832cdfb25 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.416935451Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=94a6e38a-17ba-4659-bfdd-0373a29f5a84 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.420262735Z" level=info msg="Creating container: default/busybox/busybox" id=c3697090-2d87-4531-886c-9cced11307b6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.420411686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.425488123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.426162171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.490085596Z" level=info msg="Created container 3d8bf704d18779e7df2f59da32f7a8dd4c26e2b0d5e3fb72ac2a84e012914985: default/busybox/busybox" id=c3697090-2d87-4531-886c-9cced11307b6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.490984622Z" level=info msg="Starting container: 3d8bf704d18779e7df2f59da32f7a8dd4c26e2b0d5e3fb72ac2a84e012914985" id=032ff4fe-f8cb-4246-80c8-e7e1760796af name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:21:27 embed-certs-678014 crio[780]: time="2025-11-01T10:21:27.493538359Z" level=info msg="Started container" PID=1927 containerID=3d8bf704d18779e7df2f59da32f7a8dd4c26e2b0d5e3fb72ac2a84e012914985 description=default/busybox/busybox id=032ff4fe-f8cb-4246-80c8-e7e1760796af name=/runtime.v1.RuntimeService/StartContainer sandboxID=972f43dd1fa9b1b9173b25a6583f047d892579e2dd401ca1b0a15b0b853cd834
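	Editor's note: the CRI-O messages above follow the standard CRI v1 call order the kubelet drives for the default/busybox pod: RunPodSandbox, ImageStatus, PullImage when the image is missing, CreateContainer, StartContainer. The sketch below replays that flow against the CRI v1 gRPC API as an orientation aid only; the socket path is CRI-O's default, the image name comes from the log, and the "demo-uid" and "sleep 3600" values are illustrative assumptions, not anything from kubelet or CRI-O source.

	// cri_flow.go: illustrative sketch of the CRI call sequence visible above.
	package main

	import (
		"context"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default socket; adjust for other runtimes.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		rt := runtimev1.NewRuntimeServiceClient(conn)
		img := runtimev1.NewImageServiceClient(conn)
		ctx := context.Background()

		// 1. Run the pod sandbox (default/busybox in the log above).
		sbConfig := &runtimev1.PodSandboxConfig{
			Metadata: &runtimev1.PodSandboxMetadata{
				Name: "busybox", Namespace: "default", Uid: "demo-uid",
			},
		}
		sb, err := rt.RunPodSandbox(ctx, &runtimev1.RunPodSandboxRequest{Config: sbConfig})
		if err != nil {
			log.Fatal(err)
		}

		// 2. Check the image status and pull only if it is not present.
		spec := &runtimev1.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
		status, err := img.ImageStatus(ctx, &runtimev1.ImageStatusRequest{Image: spec})
		if err != nil {
			log.Fatal(err)
		}
		if status.Image == nil {
			if _, err := img.PullImage(ctx, &runtimev1.PullImageRequest{Image: spec}); err != nil {
				log.Fatal(err)
			}
		}

		// 3. Create the container inside the sandbox, then start it.
		ctr, err := rt.CreateContainer(ctx, &runtimev1.CreateContainerRequest{
			PodSandboxId:  sb.PodSandboxId,
			SandboxConfig: sbConfig,
			Config: &runtimev1.ContainerConfig{
				Metadata: &runtimev1.ContainerMetadata{Name: "busybox"},
				Image:    spec,
				Command:  []string{"sleep", "3600"},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
		if _, err := rt.StartContainer(ctx, &runtimev1.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
			log.Fatal(err)
		}
	}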
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	3d8bf704d1877       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago        Running             busybox                   0                   972f43dd1fa9b       busybox                                      default
	1ddab98393a90       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago       Running             coredns                   0                   0d302a3f8f66e       coredns-66bc5c9577-vlf7q                     kube-system
	7306b15dd9102       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago       Running             storage-provisioner       0                   2ed911f94242f       storage-provisioner                          kube-system
	6321c7839b028       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      54 seconds ago       Running             kube-proxy                0                   7ed62595f18f7       kube-proxy-tlw2d                             kube-system
	6442b9e03887e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      54 seconds ago       Running             kindnet-cni               0                   99c6b499b15ec       kindnet-fzb8b                                kube-system
	5dc0049b33918       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   e73c5cc66fbf1       kube-apiserver-embed-certs-678014            kube-system
	e6a1fd7f8e149       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   1c1b247be84a7       kube-scheduler-embed-certs-678014            kube-system
	6e06f71bbeb31       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   8b04bcb3fb786       kube-controller-manager-embed-certs-678014   kube-system
	1ac4f904f3a01       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   e2407846b5222       etcd-embed-certs-678014                      kube-system
	
	
	==> coredns [1ddab98393a90d5b3858fb92f4d090885bbc26fa356a85d243c2ecd517289806] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34447 - 38238 "HINFO IN 7331320717262475685.8344704442817893595. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025840434s
	
	
	==> describe nodes <==
	Name:               embed-certs-678014
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-678014
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=embed-certs-678014
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-678014
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:21:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:21:21 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:21:21 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:21:21 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:21:21 +0000   Sat, 01 Nov 2025 10:21:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-678014
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                03d8f849-7655-423d-8ed7-89c54dfab59c
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-vlf7q                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-embed-certs-678014                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-fzb8b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-678014             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-678014    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-tlw2d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-678014             100m (1%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 54s   kube-proxy       
	  Normal  Starting                 61s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s   kubelet          Node embed-certs-678014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s   kubelet          Node embed-certs-678014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s   kubelet          Node embed-certs-678014 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s   node-controller  Node embed-certs-678014 event: Registered Node embed-certs-678014 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-678014 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [1ac4f904f3a01aa60bc655ef774d84e1fea7dda76d54319e71a34cd0a4be461e] <==
	{"level":"warn","ts":"2025-11-01T10:20:40.427126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"277.375307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2208"}
	{"level":"warn","ts":"2025-11-01T10:20:40.427190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.585884ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765876349171007 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-tlw2d\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-tlw2d\" value_size:3317 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:20:40.427218Z","caller":"traceutil/trace.go:172","msg":"trace[746073612] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:316; }","duration":"277.482684ms","start":"2025-11-01T10:20:40.149716Z","end":"2025-11-01T10:20:40.427199Z","steps":["trace[746073612] 'agreement among raft nodes before linearized reading'  (duration: 142.808902ms)","trace[746073612] 'range keys from in-memory index tree'  (duration: 134.513191ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:20:40.427377Z","caller":"traceutil/trace.go:172","msg":"trace[831505696] transaction","detail":"{read_only:false; response_revision:318; number_of_response:1; }","duration":"280.14031ms","start":"2025-11-01T10:20:40.147218Z","end":"2025-11-01T10:20:40.427359Z","steps":["trace[831505696] 'process raft request'  (duration: 280.039475ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:20:40.427478Z","caller":"traceutil/trace.go:172","msg":"trace[2052587682] linearizableReadLoop","detail":"{readStateIndex:331; appliedIndex:329; }","duration":"134.912861ms","start":"2025-11-01T10:20:40.292530Z","end":"2025-11-01T10:20:40.427443Z","steps":["trace[2052587682] 'read index received'  (duration: 19.520952ms)","trace[2052587682] 'applied index is now lower than readState.Index'  (duration: 115.390478ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:20:40.427491Z","caller":"traceutil/trace.go:172","msg":"trace[527226067] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"275.852037ms","start":"2025-11-01T10:20:40.151628Z","end":"2025-11-01T10:20:40.427480Z","steps":["trace[527226067] 'process raft request'  (duration: 275.743163ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:20:40.427565Z","caller":"traceutil/trace.go:172","msg":"trace[1878568111] transaction","detail":"{read_only:false; response_revision:317; number_of_response:1; }","duration":"281.234314ms","start":"2025-11-01T10:20:40.146320Z","end":"2025-11-01T10:20:40.427554Z","steps":["trace[1878568111] 'process raft request'  (duration: 146.239346ms)","trace[1878568111] 'compare'  (duration: 134.461919ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:20:40.427577Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.103225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:20:40.427607Z","caller":"traceutil/trace.go:172","msg":"trace[466593160] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:321; }","duration":"186.138698ms","start":"2025-11-01T10:20:40.241460Z","end":"2025-11-01T10:20:40.427599Z","steps":["trace[466593160] 'agreement among raft nodes before linearized reading'  (duration: 186.071017ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:20:40.427644Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.643655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-01T10:20:40.427645Z","caller":"traceutil/trace.go:172","msg":"trace[2138476253] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"278.323365ms","start":"2025-11-01T10:20:40.149309Z","end":"2025-11-01T10:20:40.427632Z","steps":["trace[2138476253] 'process raft request'  (duration: 277.986168ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:20:40.427667Z","caller":"traceutil/trace.go:172","msg":"trace[1477008336] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:321; }","duration":"220.67435ms","start":"2025-11-01T10:20:40.206986Z","end":"2025-11-01T10:20:40.427661Z","steps":["trace[1477008336] 'agreement among raft nodes before linearized reading'  (duration: 220.562318ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:20:40.427783Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.654344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-11-01T10:20:40.427811Z","caller":"traceutil/trace.go:172","msg":"trace[439793912] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:321; }","duration":"180.683018ms","start":"2025-11-01T10:20:40.247118Z","end":"2025-11-01T10:20:40.427801Z","steps":["trace[439793912] 'agreement among raft nodes before linearized reading'  (duration: 180.596982ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:20:40.427888Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"230.997074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-01T10:20:40.427915Z","caller":"traceutil/trace.go:172","msg":"trace[314202388] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:321; }","duration":"231.028353ms","start":"2025-11-01T10:20:40.196879Z","end":"2025-11-01T10:20:40.427908Z","steps":["trace[314202388] 'agreement among raft nodes before linearized reading'  (duration: 230.899991ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:20:40.427921Z","caller":"traceutil/trace.go:172","msg":"trace[1058485648] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"278.258733ms","start":"2025-11-01T10:20:40.149649Z","end":"2025-11-01T10:20:40.427908Z","steps":["trace[1058485648] 'process raft request'  (duration: 277.678318ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:20:40.427951Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.042759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-11-01T10:20:40.427990Z","caller":"traceutil/trace.go:172","msg":"trace[187857346] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:321; }","duration":"131.074459ms","start":"2025-11-01T10:20:40.296896Z","end":"2025-11-01T10:20:40.427971Z","steps":["trace[187857346] 'agreement among raft nodes before linearized reading'  (duration: 130.985236ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:20:40.428146Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"276.10955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2025-11-01T10:20:40.428173Z","caller":"traceutil/trace.go:172","msg":"trace[1962364750] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:321; }","duration":"276.140488ms","start":"2025-11-01T10:20:40.152024Z","end":"2025-11-01T10:20:40.428165Z","steps":["trace[1962364750] 'agreement among raft nodes before linearized reading'  (duration: 276.053215ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:21:32.141178Z","caller":"traceutil/trace.go:172","msg":"trace[430452685] linearizableReadLoop","detail":"{readStateIndex:472; appliedIndex:472; }","duration":"124.294139ms","start":"2025-11-01T10:21:32.016861Z","end":"2025-11-01T10:21:32.141156Z","steps":["trace[430452685] 'read index received'  (duration: 124.283741ms)","trace[430452685] 'applied index is now lower than readState.Index'  (duration: 5.757µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:21:32.141315Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.432834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:21:32.141333Z","caller":"traceutil/trace.go:172","msg":"trace[399929980] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:448; }","duration":"124.474371ms","start":"2025-11-01T10:21:32.016854Z","end":"2025-11-01T10:21:32.141328Z","steps":["trace[399929980] 'agreement among raft nodes before linearized reading'  (duration: 124.400795ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:21:32.141404Z","caller":"traceutil/trace.go:172","msg":"trace[1176413940] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"161.975576ms","start":"2025-11-01T10:21:31.979415Z","end":"2025-11-01T10:21:32.141390Z","steps":["trace[1176413940] 'process raft request'  (duration: 161.827313ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:21:35 up  3:03,  0 user,  load average: 3.97, 3.67, 2.89
	Linux embed-certs-678014 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6442b9e03887eff53765009c96dda2a9a7cf5f70293ef212c9a1ab542d92df75] <==
	I1101 10:20:41.094624       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:20:41.095632       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:20:41.095790       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:20:41.095821       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:20:41.095905       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:20:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:20:41.389632       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:20:41.389669       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:20:41.389686       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:20:41.389869       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:21:11.390499       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:21:11.390499       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:21:11.390495       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:21:11.390531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:21:12.890811       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:21:12.890928       1 metrics.go:72] Registering metrics
	I1101 10:21:12.891059       1 controller.go:711] "Syncing nftables rules"
	I1101 10:21:21.389935       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:21:21.389997       1 main.go:301] handling current node
	I1101 10:21:31.389930       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:21:31.389993       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5dc0049b33918fba583c06f87d9cd85ea90d0fb5f61d6cfe1ed74cffdf1007a5] <==
	I1101 10:20:32.266975       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:20:32.274218       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:32.274647       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:20:32.274831       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:20:32.279796       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:20:32.279883       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:32.280052       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:20:33.171428       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:20:33.178082       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:20:33.178101       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:20:33.654359       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:20:33.690320       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:20:33.777226       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:20:33.786406       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1101 10:20:33.787598       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:20:33.792030       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:20:34.203752       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:20:34.600887       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:20:34.614307       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:20:34.622543       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:20:40.075681       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:20:40.141164       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:40.150064       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:20:40.436364       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1101 10:21:34.018296       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:37846: use of closed network connection
	
	
	==> kube-controller-manager [6e06f71bbeb310b628465b39331923eccd9d4eecb62873e98746d3ac8087f7d2] <==
	I1101 10:20:39.268957       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:20:39.280438       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:20:39.286773       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:20:39.298386       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:20:39.299454       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:20:39.299568       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:20:39.299652       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:20:39.299690       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:20:39.300876       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:20:39.300876       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:20:39.300898       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:20:39.301300       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:20:39.302377       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:20:39.303039       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:20:39.305704       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:20:39.306881       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:20:39.306933       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:20:39.308006       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:20:39.310212       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:20:39.310253       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:20:39.313442       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:20:39.316670       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:20:39.327015       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:20:39.537098       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-678014" podCIDRs=["10.244.0.0/24"]
	I1101 10:21:24.445000       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6321c7839b028ab097508bc8cdbd5260b1a2f4638277fb54199a9eb9c54f99c9] <==
	I1101 10:20:40.942081       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:20:41.033663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:20:41.135090       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:20:41.135137       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 10:20:41.135288       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:20:41.167392       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:20:41.167476       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:20:41.174896       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:20:41.175414       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:20:41.175492       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:20:41.177880       1 config.go:309] "Starting node config controller"
	I1101 10:20:41.177906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:20:41.177916       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:20:41.177968       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:20:41.177999       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:20:41.179369       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:20:41.179390       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:20:41.177994       1 config.go:200] "Starting service config controller"
	I1101 10:20:41.181581       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:20:41.279921       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:20:41.281006       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:20:41.282219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e6a1fd7f8e14979a40feb6843320111f30d0a989a0cdf5d147b658f1af7c9ce8] <==
	E1101 10:20:32.232906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:20:32.232974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:20:32.232988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:20:32.233097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:20:32.233141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:20:32.233212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:20:32.233244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:20:32.233319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:20:32.233416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:20:32.233549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:20:32.233711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:20:32.233821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:20:32.233826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:20:33.048362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:20:33.104963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:20:33.134470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:20:33.154809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:20:33.204191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:20:33.254028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:20:33.265368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:20:33.293461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:20:33.326911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:20:33.481225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:20:33.486545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1101 10:20:36.226516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:20:35 embed-certs-678014 kubelet[1321]: I1101 10:20:35.536542    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-678014" podStartSLOduration=2.536525827 podStartE2EDuration="2.536525827s" podCreationTimestamp="2025-11-01 10:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:35.536405257 +0000 UTC m=+1.160774486" watchObservedRunningTime="2025-11-01 10:20:35.536525827 +0000 UTC m=+1.160895061"
	Nov 01 10:20:35 embed-certs-678014 kubelet[1321]: I1101 10:20:35.548373    1321 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:20:35 embed-certs-678014 kubelet[1321]: I1101 10:20:35.561642    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-678014" podStartSLOduration=1.5616210719999999 podStartE2EDuration="1.561621072s" podCreationTimestamp="2025-11-01 10:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:35.561550832 +0000 UTC m=+1.185920066" watchObservedRunningTime="2025-11-01 10:20:35.561621072 +0000 UTC m=+1.185990306"
	Nov 01 10:20:35 embed-certs-678014 kubelet[1321]: I1101 10:20:35.561972    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-678014" podStartSLOduration=1.561955133 podStartE2EDuration="1.561955133s" podCreationTimestamp="2025-11-01 10:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:35.549231912 +0000 UTC m=+1.173601139" watchObservedRunningTime="2025-11-01 10:20:35.561955133 +0000 UTC m=+1.186324362"
	Nov 01 10:20:39 embed-certs-678014 kubelet[1321]: I1101 10:20:39.631551    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:20:39 embed-certs-678014 kubelet[1321]: I1101 10:20:39.632342    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:20:40 embed-certs-678014 kubelet[1321]: I1101 10:20:40.491568    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9afe6a1c-b603-4bff-80ea-a8acd9e143ff-xtables-lock\") pod \"kindnet-fzb8b\" (UID: \"9afe6a1c-b603-4bff-80ea-a8acd9e143ff\") " pod="kube-system/kindnet-fzb8b"
	Nov 01 10:20:40 embed-certs-678014 kubelet[1321]: I1101 10:20:40.491633    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjn6s\" (UniqueName: \"kubernetes.io/projected/9afe6a1c-b603-4bff-80ea-a8acd9e143ff-kube-api-access-xjn6s\") pod \"kindnet-fzb8b\" (UID: \"9afe6a1c-b603-4bff-80ea-a8acd9e143ff\") " pod="kube-system/kindnet-fzb8b"
	Nov 01 10:20:40 embed-certs-678014 kubelet[1321]: I1101 10:20:40.491665    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2964bb1-7bfc-40ab-9ee9-8db9e09909ad-xtables-lock\") pod \"kube-proxy-tlw2d\" (UID: \"e2964bb1-7bfc-40ab-9ee9-8db9e09909ad\") " pod="kube-system/kube-proxy-tlw2d"
	Nov 01 10:20:40 embed-certs-678014 kubelet[1321]: I1101 10:20:40.491687    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2964bb1-7bfc-40ab-9ee9-8db9e09909ad-lib-modules\") pod \"kube-proxy-tlw2d\" (UID: \"e2964bb1-7bfc-40ab-9ee9-8db9e09909ad\") " pod="kube-system/kube-proxy-tlw2d"
	Nov 01 10:20:40 embed-certs-678014 kubelet[1321]: I1101 10:20:40.491708    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpvgb\" (UniqueName: \"kubernetes.io/projected/e2964bb1-7bfc-40ab-9ee9-8db9e09909ad-kube-api-access-zpvgb\") pod \"kube-proxy-tlw2d\" (UID: \"e2964bb1-7bfc-40ab-9ee9-8db9e09909ad\") " pod="kube-system/kube-proxy-tlw2d"
	Nov 01 10:20:40 embed-certs-678014 kubelet[1321]: I1101 10:20:40.491732    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9afe6a1c-b603-4bff-80ea-a8acd9e143ff-cni-cfg\") pod \"kindnet-fzb8b\" (UID: \"9afe6a1c-b603-4bff-80ea-a8acd9e143ff\") " pod="kube-system/kindnet-fzb8b"
	Nov 01 10:20:40 embed-certs-678014 kubelet[1321]: I1101 10:20:40.491761    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9afe6a1c-b603-4bff-80ea-a8acd9e143ff-lib-modules\") pod \"kindnet-fzb8b\" (UID: \"9afe6a1c-b603-4bff-80ea-a8acd9e143ff\") " pod="kube-system/kindnet-fzb8b"
	Nov 01 10:20:40 embed-certs-678014 kubelet[1321]: I1101 10:20:40.491787    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2964bb1-7bfc-40ab-9ee9-8db9e09909ad-kube-proxy\") pod \"kube-proxy-tlw2d\" (UID: \"e2964bb1-7bfc-40ab-9ee9-8db9e09909ad\") " pod="kube-system/kube-proxy-tlw2d"
	Nov 01 10:20:41 embed-certs-678014 kubelet[1321]: I1101 10:20:41.549265    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fzb8b" podStartSLOduration=1.5492384449999999 podStartE2EDuration="1.549238445s" podCreationTimestamp="2025-11-01 10:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:41.537128264 +0000 UTC m=+7.161497484" watchObservedRunningTime="2025-11-01 10:20:41.549238445 +0000 UTC m=+7.173607681"
	Nov 01 10:20:42 embed-certs-678014 kubelet[1321]: I1101 10:20:42.870094    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tlw2d" podStartSLOduration=2.87006705 podStartE2EDuration="2.87006705s" podCreationTimestamp="2025-11-01 10:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:20:41.550778124 +0000 UTC m=+7.175147357" watchObservedRunningTime="2025-11-01 10:20:42.87006705 +0000 UTC m=+8.494436284"
	Nov 01 10:21:21 embed-certs-678014 kubelet[1321]: I1101 10:21:21.506603    1321 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:21:21 embed-certs-678014 kubelet[1321]: I1101 10:21:21.584045    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b08350a-b7d7-4564-8275-a42d7e42cae1-config-volume\") pod \"coredns-66bc5c9577-vlf7q\" (UID: \"6b08350a-b7d7-4564-8275-a42d7e42cae1\") " pod="kube-system/coredns-66bc5c9577-vlf7q"
	Nov 01 10:21:21 embed-certs-678014 kubelet[1321]: I1101 10:21:21.584098    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d8b98733-a837-48d1-aaee-f8d72b5e81f3-tmp\") pod \"storage-provisioner\" (UID: \"d8b98733-a837-48d1-aaee-f8d72b5e81f3\") " pod="kube-system/storage-provisioner"
	Nov 01 10:21:21 embed-certs-678014 kubelet[1321]: I1101 10:21:21.584187    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wzrx\" (UniqueName: \"kubernetes.io/projected/6b08350a-b7d7-4564-8275-a42d7e42cae1-kube-api-access-7wzrx\") pod \"coredns-66bc5c9577-vlf7q\" (UID: \"6b08350a-b7d7-4564-8275-a42d7e42cae1\") " pod="kube-system/coredns-66bc5c9577-vlf7q"
	Nov 01 10:21:21 embed-certs-678014 kubelet[1321]: I1101 10:21:21.584255    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j5jf\" (UniqueName: \"kubernetes.io/projected/d8b98733-a837-48d1-aaee-f8d72b5e81f3-kube-api-access-9j5jf\") pod \"storage-provisioner\" (UID: \"d8b98733-a837-48d1-aaee-f8d72b5e81f3\") " pod="kube-system/storage-provisioner"
	Nov 01 10:21:22 embed-certs-678014 kubelet[1321]: I1101 10:21:22.628119    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vlf7q" podStartSLOduration=42.628094946 podStartE2EDuration="42.628094946s" podCreationTimestamp="2025-11-01 10:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:21:22.627865711 +0000 UTC m=+48.252234945" watchObservedRunningTime="2025-11-01 10:21:22.628094946 +0000 UTC m=+48.252464180"
	Nov 01 10:21:22 embed-certs-678014 kubelet[1321]: I1101 10:21:22.654560    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.654533266 podStartE2EDuration="41.654533266s" podCreationTimestamp="2025-11-01 10:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:21:22.641878766 +0000 UTC m=+48.266248000" watchObservedRunningTime="2025-11-01 10:21:22.654533266 +0000 UTC m=+48.278902506"
	Nov 01 10:21:25 embed-certs-678014 kubelet[1321]: I1101 10:21:25.006309    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9fc\" (UniqueName: \"kubernetes.io/projected/fcbbe122-495c-462f-913f-f3f2b1b23890-kube-api-access-fr9fc\") pod \"busybox\" (UID: \"fcbbe122-495c-462f-913f-f3f2b1b23890\") " pod="default/busybox"
	Nov 01 10:21:27 embed-certs-678014 kubelet[1321]: I1101 10:21:27.644426    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.445030936 podStartE2EDuration="3.644404206s" podCreationTimestamp="2025-11-01 10:21:24 +0000 UTC" firstStartedPulling="2025-11-01 10:21:25.216987378 +0000 UTC m=+50.841356604" lastFinishedPulling="2025-11-01 10:21:27.416360658 +0000 UTC m=+53.040729874" observedRunningTime="2025-11-01 10:21:27.644034597 +0000 UTC m=+53.268403837" watchObservedRunningTime="2025-11-01 10:21:27.644404206 +0000 UTC m=+53.268773439"
	
	
	==> storage-provisioner [7306b15dd9102139de5b097f804c8fd30ed4a58fb3f789392724b003d8245a37] <==
	I1101 10:21:21.916201       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:21:21.925346       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:21:21.925462       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:21:21.928424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:21.934699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:21:21.935000       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:21:21.935051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8bf23cc-2536-4ed5-ae0e-07000c30e5da", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-678014_2647d8de-4c6e-4bf9-a462-a53fdad01849 became leader
	I1101 10:21:21.935174       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-678014_2647d8de-4c6e-4bf9-a462-a53fdad01849!
	W1101 10:21:21.937307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:21.941698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:21:22.035681       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-678014_2647d8de-4c6e-4bf9-a462-a53fdad01849!
	W1101 10:21:23.945107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:23.949515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:25.953112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:25.957409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:27.961347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:27.965614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:29.969575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:29.974059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:31.977335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:32.142436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:34.146500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:34.151316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-678014 -n embed-certs-678014
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-678014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.37s)
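For context on the post-mortem logs above: the storage-provisioner output shows it acquiring its leader-election lease on the kube-system/k8s.io-minikube-hostpath Endpoints object (and the repeated "v1 Endpoints is deprecated" warnings refer to that same resource). A minimal sketch of checking that state by hand, assuming the embed-certs-678014 kubeconfig context and the pod names shown in the logs are still current:

    # Inspect the provisioner's leader-election Endpoints object named in its log
    kubectl --context embed-certs-678014 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    # Confirm the pods the kubelet reported as started are still running
    kubectl --context embed-certs-678014 -n kube-system get pods storage-provisioner coredns-66bc5c9577-vlf7q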

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-535119 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-535119 --alsologtostderr -v=1: exit status 80 (2.401879997s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-535119 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:22:37.034171  800472 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:22:37.034310  800472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:37.034320  800472 out.go:374] Setting ErrFile to fd 2...
	I1101 10:22:37.034326  800472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:37.034552  800472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:22:37.034813  800472 out.go:368] Setting JSON to false
	I1101 10:22:37.034875  800472 mustload.go:66] Loading cluster: default-k8s-diff-port-535119
	I1101 10:22:37.035231  800472 config.go:182] Loaded profile config "default-k8s-diff-port-535119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:37.035657  800472 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-535119 --format={{.State.Status}}
	I1101 10:22:37.055886  800472 host.go:66] Checking if "default-k8s-diff-port-535119" exists ...
	I1101 10:22:37.056207  800472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:37.138889  800472 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-01 10:22:37.124387054 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:37.139759  800472 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-535119 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:22:37.141200  800472 out.go:179] * Pausing node default-k8s-diff-port-535119 ... 
	I1101 10:22:37.142330  800472 host.go:66] Checking if "default-k8s-diff-port-535119" exists ...
	I1101 10:22:37.142696  800472 ssh_runner.go:195] Run: systemctl --version
	I1101 10:22:37.142751  800472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-535119
	I1101 10:22:37.170975  800472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/default-k8s-diff-port-535119/id_rsa Username:docker}
	I1101 10:22:37.284189  800472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:37.309803  800472 pause.go:52] kubelet running: true
	I1101 10:22:37.309925  800472 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:22:37.484928  800472 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:22:37.485026  800472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:22:37.558393  800472 cri.go:89] found id: "598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995"
	I1101 10:22:37.558416  800472 cri.go:89] found id: "f0858d36c66240ef67cdb11f52c1eeb8c54ef043dfd7a32c19597f2b57e5280e"
	I1101 10:22:37.558419  800472 cri.go:89] found id: "63992ee9b84cb33f02c4ed9eb2ca3b69146006e8c4eda10c81fd8d5c3fd45734"
	I1101 10:22:37.558422  800472 cri.go:89] found id: "4d191bebc00a77146f23a5ffeba9c52ca18645185d9f87fdeb8cedc3bfe48be8"
	I1101 10:22:37.558425  800472 cri.go:89] found id: "5e268959350f62859e2f83296fa5f9105de7d5d5542f85a320b57d5161ea134a"
	I1101 10:22:37.558428  800472 cri.go:89] found id: "ca9bddec198066b48591adadfc97f2cc7f80b78bc9f559075ca0db64b5aea9f8"
	I1101 10:22:37.558431  800472 cri.go:89] found id: "48823e7d320e569e3fa912f75beb7c80e3cfa0240efaf9018fddf16a5c86137b"
	I1101 10:22:37.558433  800472 cri.go:89] found id: "d8dbc23691e83e3bbb62dac3c1cc0308bcfb69a5f2729f19473bfa3c56e8bea3"
	I1101 10:22:37.558435  800472 cri.go:89] found id: "789a7612b3f36c70178a0d6094a84b47c8beecea55e7d6586762c03604f5c2ad"
	I1101 10:22:37.558441  800472 cri.go:89] found id: "26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21"
	I1101 10:22:37.558444  800472 cri.go:89] found id: "367e9352dc1b956483b3e41c8f68f7c436e82a3befeac0422a3111dc25ea1263"
	I1101 10:22:37.558446  800472 cri.go:89] found id: ""
	I1101 10:22:37.558494  800472 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:22:37.571807  800472 retry.go:31] will retry after 222.269612ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:37Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:22:37.794276  800472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:37.808116  800472 pause.go:52] kubelet running: false
	I1101 10:22:37.808174  800472 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:22:37.968698  800472 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:22:37.968783  800472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:22:38.045587  800472 cri.go:89] found id: "598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995"
	I1101 10:22:38.045610  800472 cri.go:89] found id: "f0858d36c66240ef67cdb11f52c1eeb8c54ef043dfd7a32c19597f2b57e5280e"
	I1101 10:22:38.045614  800472 cri.go:89] found id: "63992ee9b84cb33f02c4ed9eb2ca3b69146006e8c4eda10c81fd8d5c3fd45734"
	I1101 10:22:38.045617  800472 cri.go:89] found id: "4d191bebc00a77146f23a5ffeba9c52ca18645185d9f87fdeb8cedc3bfe48be8"
	I1101 10:22:38.045620  800472 cri.go:89] found id: "5e268959350f62859e2f83296fa5f9105de7d5d5542f85a320b57d5161ea134a"
	I1101 10:22:38.045624  800472 cri.go:89] found id: "ca9bddec198066b48591adadfc97f2cc7f80b78bc9f559075ca0db64b5aea9f8"
	I1101 10:22:38.045626  800472 cri.go:89] found id: "48823e7d320e569e3fa912f75beb7c80e3cfa0240efaf9018fddf16a5c86137b"
	I1101 10:22:38.045629  800472 cri.go:89] found id: "d8dbc23691e83e3bbb62dac3c1cc0308bcfb69a5f2729f19473bfa3c56e8bea3"
	I1101 10:22:38.045631  800472 cri.go:89] found id: "789a7612b3f36c70178a0d6094a84b47c8beecea55e7d6586762c03604f5c2ad"
	I1101 10:22:38.045638  800472 cri.go:89] found id: "26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21"
	I1101 10:22:38.045659  800472 cri.go:89] found id: "367e9352dc1b956483b3e41c8f68f7c436e82a3befeac0422a3111dc25ea1263"
	I1101 10:22:38.045670  800472 cri.go:89] found id: ""
	I1101 10:22:38.045710  800472 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:22:38.058250  800472 retry.go:31] will retry after 380.646421ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:38Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:22:38.439895  800472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:38.454612  800472 pause.go:52] kubelet running: false
	I1101 10:22:38.454669  800472 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:22:38.607176  800472 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:22:38.607289  800472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:22:38.699513  800472 cri.go:89] found id: "598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995"
	I1101 10:22:38.699546  800472 cri.go:89] found id: "f0858d36c66240ef67cdb11f52c1eeb8c54ef043dfd7a32c19597f2b57e5280e"
	I1101 10:22:38.699553  800472 cri.go:89] found id: "63992ee9b84cb33f02c4ed9eb2ca3b69146006e8c4eda10c81fd8d5c3fd45734"
	I1101 10:22:38.699558  800472 cri.go:89] found id: "4d191bebc00a77146f23a5ffeba9c52ca18645185d9f87fdeb8cedc3bfe48be8"
	I1101 10:22:38.699563  800472 cri.go:89] found id: "5e268959350f62859e2f83296fa5f9105de7d5d5542f85a320b57d5161ea134a"
	I1101 10:22:38.699567  800472 cri.go:89] found id: "ca9bddec198066b48591adadfc97f2cc7f80b78bc9f559075ca0db64b5aea9f8"
	I1101 10:22:38.699571  800472 cri.go:89] found id: "48823e7d320e569e3fa912f75beb7c80e3cfa0240efaf9018fddf16a5c86137b"
	I1101 10:22:38.699575  800472 cri.go:89] found id: "d8dbc23691e83e3bbb62dac3c1cc0308bcfb69a5f2729f19473bfa3c56e8bea3"
	I1101 10:22:38.699579  800472 cri.go:89] found id: "789a7612b3f36c70178a0d6094a84b47c8beecea55e7d6586762c03604f5c2ad"
	I1101 10:22:38.699598  800472 cri.go:89] found id: "26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21"
	I1101 10:22:38.699602  800472 cri.go:89] found id: "367e9352dc1b956483b3e41c8f68f7c436e82a3befeac0422a3111dc25ea1263"
	I1101 10:22:38.699607  800472 cri.go:89] found id: ""
	I1101 10:22:38.699656  800472 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:22:38.713139  800472 retry.go:31] will retry after 360.247648ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:38Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:22:39.074568  800472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:39.088886  800472 pause.go:52] kubelet running: false
	I1101 10:22:39.088946  800472 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:22:39.254974  800472 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:22:39.255061  800472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:22:39.335425  800472 cri.go:89] found id: "598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995"
	I1101 10:22:39.335449  800472 cri.go:89] found id: "f0858d36c66240ef67cdb11f52c1eeb8c54ef043dfd7a32c19597f2b57e5280e"
	I1101 10:22:39.335455  800472 cri.go:89] found id: "63992ee9b84cb33f02c4ed9eb2ca3b69146006e8c4eda10c81fd8d5c3fd45734"
	I1101 10:22:39.335461  800472 cri.go:89] found id: "4d191bebc00a77146f23a5ffeba9c52ca18645185d9f87fdeb8cedc3bfe48be8"
	I1101 10:22:39.335466  800472 cri.go:89] found id: "5e268959350f62859e2f83296fa5f9105de7d5d5542f85a320b57d5161ea134a"
	I1101 10:22:39.335472  800472 cri.go:89] found id: "ca9bddec198066b48591adadfc97f2cc7f80b78bc9f559075ca0db64b5aea9f8"
	I1101 10:22:39.335477  800472 cri.go:89] found id: "48823e7d320e569e3fa912f75beb7c80e3cfa0240efaf9018fddf16a5c86137b"
	I1101 10:22:39.335481  800472 cri.go:89] found id: "d8dbc23691e83e3bbb62dac3c1cc0308bcfb69a5f2729f19473bfa3c56e8bea3"
	I1101 10:22:39.335485  800472 cri.go:89] found id: "789a7612b3f36c70178a0d6094a84b47c8beecea55e7d6586762c03604f5c2ad"
	I1101 10:22:39.335493  800472 cri.go:89] found id: "26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21"
	I1101 10:22:39.335498  800472 cri.go:89] found id: "367e9352dc1b956483b3e41c8f68f7c436e82a3befeac0422a3111dc25ea1263"
	I1101 10:22:39.335502  800472 cri.go:89] found id: ""
	I1101 10:22:39.335551  800472 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:22:39.350412  800472 out.go:203] 
	W1101 10:22:39.351496  800472 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:22:39.351514  800472 out.go:285] * 
	* 
	W1101 10:22:39.355728  800472 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:22:39.360198  800472 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-535119 --alsologtostderr -v=1 failed: exit status 80
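The stderr above shows the pause path itself: minikube stopped the kubelet, listed kube-system containers via crictl, then repeatedly failed on `sudo runc list -f json` with "open /run/runc: no such file or directory" until it gave up with GUEST_PAUSE. A minimal sketch of re-running the same probes by hand, assuming the default-k8s-diff-port-535119 profile is still up and reachable over `minikube ssh`:

    # Same checks minikube's pause path issued, taken from the log above
    minikube ssh -p default-k8s-diff-port-535119 -- sudo systemctl is-active kubelet
    minikube ssh -p default-k8s-diff-port-535119 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    minikube ssh -p default-k8s-diff-port-535119 -- sudo runc list -f json
    # /run/runc is the state root a bare `runc list` reads; whether CRI-O keeps its
    # runc state there or under /run/crio depends on configuration, so list both
    minikube ssh -p default-k8s-diff-port-535119 -- ls -ld /run/runc /run/crio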
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-535119
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-535119:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9",
	        "Created": "2025-11-01T10:20:27.432288023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 784759,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:21:36.480851986Z",
	            "FinishedAt": "2025-11-01T10:21:35.447458484Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/hosts",
	        "LogPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9-json.log",
	        "Name": "/default-k8s-diff-port-535119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-535119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-535119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9",
	                "LowerDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-535119",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-535119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-535119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-535119",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-535119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d2674fbdba086459517dc68b44955fb1fb88643c4b55f175f960d6e7fec5032",
	            "SandboxKey": "/var/run/docker/netns/8d2674fbdba0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-535119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:07:15:ff:64:17",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "adb717c923a7eb081a40be81c8474558a336c362715ad5409671064c3146fad7",
	                    "EndpointID": "a1fb45f892bb846ab0d7f5f7b490d931d5be5fbf3ccb48454c7ad8fca3040a3e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-535119",
	                        "709c1dd68365"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
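The docker inspect output above records the host-port mappings the harness relies on (22/tcp -> 33218 for SSH, 8444/tcp -> 33221 for the non-default API server port) plus the container's pause state. A small sketch of reading those fields directly, using the same Go-template form the test already used for 22/tcp; the container name is assumed to still be default-k8s-diff-port-535119:

    # Published host port for the API server (8444/tcp in this profile)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-535119
    # Whether the container is actually paused, independent of minikube's view
    docker container inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-535119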
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119: exit status 2 (477.765051ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-535119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-535119 logs -n 25: (1.420562471s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-456743 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat docker --no-pager                                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/docker/daemon.json                                                                                        │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo docker system info                                                                                                 │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cri-dockerd --version                                                                                              │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat containerd --no-pager                                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/containerd/config.toml                                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo containerd config dump                                                                                             │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl cat crio --no-pager                                                                                      │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo crio config                                                                                                        │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ delete  │ -p auto-456743                                                                                                                         │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-535119 image list --format=json                                                                                  │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-535119 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ start   │ -p calico-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-456743                │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:22:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:22:39.426748  801153 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:22:39.427130  801153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:39.427140  801153 out.go:374] Setting ErrFile to fd 2...
	I1101 10:22:39.427148  801153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:39.427502  801153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:22:39.428380  801153 out.go:368] Setting JSON to false
	I1101 10:22:39.430093  801153 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11096,"bootTime":1761981463,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:22:39.430238  801153 start.go:143] virtualization: kvm guest
	I1101 10:22:39.432643  801153 out.go:179] * [calico-456743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:22:39.433991  801153 notify.go:221] Checking for updates...
	I1101 10:22:39.434023  801153 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:22:39.435975  801153 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:22:39.437039  801153 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:22:39.438076  801153 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:22:39.439072  801153 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:22:39.440096  801153 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:22:39.442309  801153 config.go:182] Loaded profile config "default-k8s-diff-port-535119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:39.442449  801153 config.go:182] Loaded profile config "embed-certs-678014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:39.442571  801153 config.go:182] Loaded profile config "kindnet-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:39.442718  801153 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:22:39.472034  801153 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:22:39.472143  801153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:39.546867  801153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:22:39.534589199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:39.546978  801153 docker.go:319] overlay module found
	I1101 10:22:39.548335  801153 out.go:179] * Using the docker driver based on user configuration
	I1101 10:22:39.549226  801153 start.go:309] selected driver: docker
	I1101 10:22:39.549246  801153 start.go:930] validating driver "docker" against <nil>
	I1101 10:22:39.549261  801153 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:22:39.549809  801153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:39.618417  801153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:22:39.606236085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:39.618726  801153 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:22:39.619053  801153 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:22:39.620465  801153 out.go:179] * Using Docker driver with root privileges
	I1101 10:22:39.621492  801153 cni.go:84] Creating CNI manager for "calico"
	I1101 10:22:39.621517  801153 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1101 10:22:39.621619  801153 start.go:353] cluster config:
	{Name:calico-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:22:39.623223  801153 out.go:179] * Starting "calico-456743" primary control-plane node in "calico-456743" cluster
	I1101 10:22:39.624210  801153 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:22:39.625150  801153 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:22:39.626015  801153 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:22:39.626071  801153 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:22:39.626089  801153 cache.go:59] Caching tarball of preloaded images
	I1101 10:22:39.626108  801153 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:22:39.626249  801153 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:22:39.626263  801153 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:22:39.626400  801153 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/config.json ...
	I1101 10:22:39.626431  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/config.json: {Name:mk90135cbb56ada87dd0b110a6a847dd8a879021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:39.650437  801153 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:22:39.650463  801153 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:22:39.650486  801153 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:22:39.650535  801153 start.go:360] acquireMachinesLock for calico-456743: {Name:mk47d56ccd1a80cdcd4a1e14702b1203b633ff91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:22:39.650665  801153 start.go:364] duration metric: took 102.257µs to acquireMachinesLock for "calico-456743"
	I1101 10:22:39.650699  801153 start.go:93] Provisioning new machine with config: &{Name:calico-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:22:39.650785  801153 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:22:38.587657  793145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:22:39.087824  793145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:22:39.588277  793145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:22:39.673505  793145 kubeadm.go:1114] duration metric: took 4.182845298s to wait for elevateKubeSystemPrivileges
	I1101 10:22:39.673541  793145 kubeadm.go:403] duration metric: took 16.176743829s to StartCluster
	I1101 10:22:39.673566  793145 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:39.673634  793145 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:22:39.676184  793145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:39.676524  793145 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:22:39.677729  793145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:22:39.677953  793145 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:22:39.678036  793145 config.go:182] Loaded profile config "kindnet-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:39.678070  793145 addons.go:70] Setting storage-provisioner=true in profile "kindnet-456743"
	I1101 10:22:39.678082  793145 addons.go:70] Setting default-storageclass=true in profile "kindnet-456743"
	I1101 10:22:39.678094  793145 addons.go:239] Setting addon storage-provisioner=true in "kindnet-456743"
	I1101 10:22:39.678096  793145 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-456743"
	I1101 10:22:39.678129  793145 host.go:66] Checking if "kindnet-456743" exists ...
	I1101 10:22:39.678482  793145 cli_runner.go:164] Run: docker container inspect kindnet-456743 --format={{.State.Status}}
	I1101 10:22:39.678792  793145 cli_runner.go:164] Run: docker container inspect kindnet-456743 --format={{.State.Status}}
	I1101 10:22:39.678890  793145 out.go:179] * Verifying Kubernetes components...
	I1101 10:22:39.679630  793145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:22:39.706731  793145 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 01 10:22:03 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:03.357996658Z" level=info msg="Started container" PID=1748 containerID=5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper id=f883c8cd-f37c-4c93-aaf9-da64d926b4a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3964e2310a4737c1fca666a42d7c6db7926ea1d8e42129e0f090f0a1c14adb3c
	Nov 01 10:22:03 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:03.952810229Z" level=info msg="Removing container: 3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9" id=94a1364c-1a60-46cb-8f10-be800238a702 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:03 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:03.970998337Z" level=info msg="Removed container 3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper" id=94a1364c-1a60-46cb-8f10-be800238a702 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:17 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:17.995576926Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ce73a9c-8bd6-47bf-bea5-1aa2476ce6ea name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:17 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:17.996633058Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61a94c91-d874-4069-9d5a-9caa027d0fdb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:17 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:17.997829947Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2e31ce1f-4682-4f5d-b519-dbf295aed339 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:17 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:17.998040858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.003948955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.004163602Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/28f9c884743b4b73e2aa0e8db09eb652b032e7b3665e6d54fc283539f019de78/merged/etc/passwd: no such file or directory"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.00420543Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/28f9c884743b4b73e2aa0e8db09eb652b032e7b3665e6d54fc283539f019de78/merged/etc/group: no such file or directory"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.004542142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.033588573Z" level=info msg="Created container 598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995: kube-system/storage-provisioner/storage-provisioner" id=2e31ce1f-4682-4f5d-b519-dbf295aed339 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.034389704Z" level=info msg="Starting container: 598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995" id=22100c98-b57a-43d0-91f9-b4478617f3f8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.036667347Z" level=info msg="Started container" PID=1762 containerID=598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995 description=kube-system/storage-provisioner/storage-provisioner id=22100c98-b57a-43d0-91f9-b4478617f3f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15fe6fdd58f947a6f6f2060bdedb454d57e35e568b215902a8399436e6a07229
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.825237072Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2d9adc30-bc64-469b-8dad-daaa5bc75829 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.826353289Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e6b47d22-5743-4dfd-b566-9547572bcc97 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.827371326Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper" id=ac4d9184-c4c5-4426-9a1c-0e0c23ed8099 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.827523318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.834396106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.835089017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.872818183Z" level=info msg="Created container 26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper" id=ac4d9184-c4c5-4426-9a1c-0e0c23ed8099 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.873659778Z" level=info msg="Starting container: 26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21" id=93a22018-e72b-423c-acd9-6b4448f1027c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.87646548Z" level=info msg="Started container" PID=1797 containerID=26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper id=93a22018-e72b-423c-acd9-6b4448f1027c name=/runtime.v1.RuntimeService/StartContainer sandboxID=3964e2310a4737c1fca666a42d7c6db7926ea1d8e42129e0f090f0a1c14adb3c
	Nov 01 10:22:27 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:27.024180307Z" level=info msg="Removing container: 5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d" id=97eb0b4a-e36a-4384-a4e6-2264ce5c49ff name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:27 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:27.039983698Z" level=info msg="Removed container 5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper" id=97eb0b4a-e36a-4384-a4e6-2264ce5c49ff name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	26092099342bb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   3964e2310a473       dashboard-metrics-scraper-6ffb444bf9-plkm7             kubernetes-dashboard
	598929db993d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   15fe6fdd58f94       storage-provisioner                                    kube-system
	367e9352dc1b9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   a4cb280f04767       kubernetes-dashboard-855c9754f9-mgn6f                  kubernetes-dashboard
	5f74118a7dcc1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   a1c3396c5fb60       busybox                                                default
	f0858d36c6624       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   a0d4bd0be9084       coredns-66bc5c9577-c4s2q                               kube-system
	63992ee9b84cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   15fe6fdd58f94       storage-provisioner                                    kube-system
	4d191bebc00a7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   1ce1f924eda8a       kube-proxy-6tl8q                                       kube-system
	5e268959350f6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   af75297061f94       kindnet-fvr2t                                          kube-system
	ca9bddec19806       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   c758509cf8c30       etcd-default-k8s-diff-port-535119                      kube-system
	48823e7d320e5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   7404b45af1ea5       kube-scheduler-default-k8s-diff-port-535119            kube-system
	d8dbc23691e83       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   1846f03df77ea       kube-controller-manager-default-k8s-diff-port-535119   kube-system
	789a7612b3f36       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   1b7bf1623769e       kube-apiserver-default-k8s-diff-port-535119            kube-system
	
	
	==> coredns [f0858d36c66240ef67cdb11f52c1eeb8c54ef043dfd7a32c19597f2b57e5280e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48596 - 44903 "HINFO IN 6803641912873057387.3869622318025249585. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032000292s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-535119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-535119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=default-k8s-diff-port-535119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-535119
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:22:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:22:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:22:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:22:16 +0000   Sat, 01 Nov 2025 10:21:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-535119
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a6fa098f-22f7-43f7-a2bd-0a700ca3d7aa
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-c4s2q                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-535119                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-fvr2t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-535119             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-535119    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-6tl8q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-535119             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-plkm7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mgn6f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-535119 event: Registered Node default-k8s-diff-port-535119 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-535119 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node default-k8s-diff-port-535119 event: Registered Node default-k8s-diff-port-535119 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [ca9bddec198066b48591adadfc97f2cc7f80b78bc9f559075ca0db64b5aea9f8] <==
	{"level":"warn","ts":"2025-11-01T10:21:44.983924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:44.992024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:44.999316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.007968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.015370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.025594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.034268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.042068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.049779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.058644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.065410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.073235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.081920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.089350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.098339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.105951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.112820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.126266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.133199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.141974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.163443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.174752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.184028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.245909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41630","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:22:13.441310Z","caller":"traceutil/trace.go:172","msg":"trace[408998277] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"132.778794ms","start":"2025-11-01T10:22:13.308495Z","end":"2025-11-01T10:22:13.441274Z","steps":["trace[408998277] 'process raft request'  (duration: 130.426475ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:22:40 up  3:04,  0 user,  load average: 4.19, 3.81, 3.00
	Linux default-k8s-diff-port-535119 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e268959350f62859e2f83296fa5f9105de7d5d5542f85a320b57d5161ea134a] <==
	I1101 10:21:47.372203       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:21:47.372483       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:21:47.372653       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:21:47.372672       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:21:47.372698       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:21:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:21:47.668954       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:21:47.669029       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:21:47.669046       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:21:47.669230       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:21:47.969674       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:21:47.969848       1 metrics.go:72] Registering metrics
	I1101 10:21:47.969969       1 controller.go:711] "Syncing nftables rules"
	I1101 10:21:57.576884       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:21:57.576958       1 main.go:301] handling current node
	I1101 10:22:07.579233       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:22:07.579274       1 main.go:301] handling current node
	I1101 10:22:17.577062       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:22:17.577107       1 main.go:301] handling current node
	I1101 10:22:27.577002       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:22:27.577050       1 main.go:301] handling current node
	I1101 10:22:37.579978       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:22:37.580033       1 main.go:301] handling current node
	
	
	==> kube-apiserver [789a7612b3f36c70178a0d6094a84b47c8beecea55e7d6586762c03604f5c2ad] <==
	I1101 10:21:45.908637       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:21:45.908725       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:21:45.908756       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:21:45.908781       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:21:45.905898       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:21:45.909442       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:21:45.916012       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:21:45.916165       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:21:45.924147       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:21:45.938054       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:21:45.940026       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:21:45.948473       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:21:45.964633       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:21:46.333746       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:21:46.373739       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:21:46.403861       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:21:46.421338       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:21:46.437083       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:21:46.498364       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.204.13"}
	I1101 10:21:46.512170       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.173.79"}
	I1101 10:21:46.801694       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:21:49.376371       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:21:49.726017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:21:49.726017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:21:49.776615       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d8dbc23691e83e3bbb62dac3c1cc0308bcfb69a5f2729f19473bfa3c56e8bea3] <==
	I1101 10:21:49.222073       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:21:49.222356       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:21:49.222525       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:21:49.224130       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:21:49.224345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:21:49.225431       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:21:49.225552       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:21:49.227908       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:21:49.229198       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:21:49.229252       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:21:49.229284       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:21:49.229292       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:21:49.229297       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:21:49.230308       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:21:49.231496       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:21:49.233647       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:21:49.233681       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:21:49.233776       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:21:49.234944       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:21:49.238159       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:21:49.240391       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:21:49.245575       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:49.245597       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:21:49.245605       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:21:49.249125       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4d191bebc00a77146f23a5ffeba9c52ca18645185d9f87fdeb8cedc3bfe48be8] <==
	I1101 10:21:47.233674       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:21:47.302799       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:21:47.403422       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:21:47.403461       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:21:47.403569       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:21:47.423184       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:21:47.423257       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:21:47.429617       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:21:47.430096       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:21:47.430138       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:47.431714       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:21:47.431795       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:21:47.431827       1 config.go:309] "Starting node config controller"
	I1101 10:21:47.431830       1 config.go:200] "Starting service config controller"
	I1101 10:21:47.431852       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:21:47.431829       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:21:47.431865       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:21:47.431861       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:21:47.431871       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:21:47.532231       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:21:47.532257       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:21:47.532296       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [48823e7d320e569e3fa912f75beb7c80e3cfa0240efaf9018fddf16a5c86137b] <==
	I1101 10:21:44.253667       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:21:45.818166       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:21:45.818201       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:21:45.818213       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:21:45.818222       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:21:45.926485       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:21:45.926583       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:45.930246       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:21:45.930494       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:21:45.930401       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:21:45.930555       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:21:46.031665       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:21:52 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:52.889021     717 scope.go:117] "RemoveContainer" containerID="beecce84e8bf53b7edd5808620bc33297cd1f8853907f494b92cac8f0f2800ce"
	Nov 01 10:21:53 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:53.505383     717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:21:53 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:53.899001     717 scope.go:117] "RemoveContainer" containerID="beecce84e8bf53b7edd5808620bc33297cd1f8853907f494b92cac8f0f2800ce"
	Nov 01 10:21:53 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:53.899334     717 scope.go:117] "RemoveContainer" containerID="3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9"
	Nov 01 10:21:53 default-k8s-diff-port-535119 kubelet[717]: E1101 10:21:53.899506     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:21:54 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:54.908795     717 scope.go:117] "RemoveContainer" containerID="3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9"
	Nov 01 10:21:54 default-k8s-diff-port-535119 kubelet[717]: E1101 10:21:54.908983     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:21:57 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:57.929691     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mgn6f" podStartSLOduration=2.060232897 podStartE2EDuration="8.929669954s" podCreationTimestamp="2025-11-01 10:21:49 +0000 UTC" firstStartedPulling="2025-11-01 10:21:50.180324749 +0000 UTC m=+7.461441256" lastFinishedPulling="2025-11-01 10:21:57.049761789 +0000 UTC m=+14.330878313" observedRunningTime="2025-11-01 10:21:57.929556078 +0000 UTC m=+15.210672587" watchObservedRunningTime="2025-11-01 10:21:57.929669954 +0000 UTC m=+15.210786490"
	Nov 01 10:22:03 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:03.301361     717 scope.go:117] "RemoveContainer" containerID="3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9"
	Nov 01 10:22:03 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:03.949733     717 scope.go:117] "RemoveContainer" containerID="3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9"
	Nov 01 10:22:03 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:03.950212     717 scope.go:117] "RemoveContainer" containerID="5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d"
	Nov 01 10:22:03 default-k8s-diff-port-535119 kubelet[717]: E1101 10:22:03.950445     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:22:13 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:13.300940     717 scope.go:117] "RemoveContainer" containerID="5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d"
	Nov 01 10:22:13 default-k8s-diff-port-535119 kubelet[717]: E1101 10:22:13.301238     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:22:17 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:17.995186     717 scope.go:117] "RemoveContainer" containerID="63992ee9b84cb33f02c4ed9eb2ca3b69146006e8c4eda10c81fd8d5c3fd45734"
	Nov 01 10:22:26 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:26.824682     717 scope.go:117] "RemoveContainer" containerID="5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d"
	Nov 01 10:22:27 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:27.022603     717 scope.go:117] "RemoveContainer" containerID="5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d"
	Nov 01 10:22:27 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:27.022961     717 scope.go:117] "RemoveContainer" containerID="26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21"
	Nov 01 10:22:27 default-k8s-diff-port-535119 kubelet[717]: E1101 10:22:27.023335     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:22:33 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:33.301237     717 scope.go:117] "RemoveContainer" containerID="26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21"
	Nov 01 10:22:33 default-k8s-diff-port-535119 kubelet[717]: E1101 10:22:33.301484     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:22:37 default-k8s-diff-port-535119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:22:37 default-k8s-diff-port-535119 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:22:37 default-k8s-diff-port-535119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:22:37 default-k8s-diff-port-535119 systemd[1]: kubelet.service: Consumed 1.940s CPU time.
	
	
	==> kubernetes-dashboard [367e9352dc1b956483b3e41c8f68f7c436e82a3befeac0422a3111dc25ea1263] <==
	2025/11/01 10:21:57 Using namespace: kubernetes-dashboard
	2025/11/01 10:21:57 Using in-cluster config to connect to apiserver
	2025/11/01 10:21:57 Using secret token for csrf signing
	2025/11/01 10:21:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:21:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:21:57 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:21:57 Generating JWE encryption key
	2025/11/01 10:21:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:21:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:21:57 Initializing JWE encryption key from synchronized object
	2025/11/01 10:21:57 Creating in-cluster Sidecar client
	2025/11/01 10:21:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:21:57 Serving insecurely on HTTP port: 9090
	2025/11/01 10:22:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:21:57 Starting overwatch
	
	
	==> storage-provisioner [598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995] <==
	I1101 10:22:18.050475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:22:18.058767       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:22:18.058815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:22:18.061373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:21.516390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:25.777515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:29.376576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:32.430744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:35.453232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:35.459873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:22:35.460062       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:22:35.460233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac9104e3-50b9-4617-bb83-1ca4dc037b6d", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-535119_4febf099-037f-4a5e-8bf2-a80040994b27 became leader
	I1101 10:22:35.460284       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-535119_4febf099-037f-4a5e-8bf2-a80040994b27!
	W1101 10:22:35.462801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:35.466828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:22:35.561161       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-535119_4febf099-037f-4a5e-8bf2-a80040994b27!
	W1101 10:22:37.470923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:37.474982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:39.478771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:39.483883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [63992ee9b84cb33f02c4ed9eb2ca3b69146006e8c4eda10c81fd8d5c3fd45734] <==
	I1101 10:21:47.201672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:22:17.206780       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
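The storage-provisioner failure in the logs above ([63992ee9…]) is its startup probe of the in-cluster apiserver service address (GET https://10.96.0.1:443/version?timeout=32s) hitting an i/o timeout. A minimal client-go sketch of that same probe is shown below; it assumes it runs inside a pod with a service account mounted, and the file name, timeout handling and output are illustrative only, not minikube's or the provisioner's actual code.

	// version_probe.go: hedged sketch of an in-cluster apiserver version check,
	// mirroring the request that timed out in the storage-provisioner log above.
	package main

	import (
		"fmt"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // reads the mounted token and KUBERNETES_SERVICE_HOST/PORT
		if err != nil {
			log.Fatalf("not running in-cluster: %v", err)
		}
		cfg.Timeout = 32 * time.Second // comparable to the ?timeout=32s seen in the failing request

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		v, err := cs.Discovery().ServerVersion() // GET /version, the call that hit the i/o timeout
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Println("apiserver:", v.GitVersion)
	}

When the service VIP is unreachable from the pod network, as here, this call fails only after the client timeout expires, which matches the ~30s gap between the provisioner's start and its fatal exit in the log.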
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119: exit status 2 (394.823953ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-535119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
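For reference, the jsonpath/field-selector scan just above (status.phase!=Running) has a straightforward client-go equivalent. The sketch below uses the current kubeconfig context rather than the explicit --context flag the harness passes, and is only an illustration of the same query, not the harness's implementation.

	// not_running.go: list pods outside the Running phase across all namespaces,
	// mirroring `kubectl get po --field-selector=status.phase!=Running` above.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: default kubeconfig location and currently selected context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running", // same selector as the kubectl invocation above
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}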
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-535119
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-535119:

-- stdout --
	[
	    {
	        "Id": "709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9",
	        "Created": "2025-11-01T10:20:27.432288023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 784759,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:21:36.480851986Z",
	            "FinishedAt": "2025-11-01T10:21:35.447458484Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/hostname",
	        "HostsPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/hosts",
	        "LogPath": "/var/lib/docker/containers/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9/709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9-json.log",
	        "Name": "/default-k8s-diff-port-535119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-535119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-535119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "709c1dd683650c73e3e3369f749606913021d54039aaf3a55db21f73a79805b9",
	                "LowerDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e9a0c3ffe8511d599910c2afa408a05e6eafb69152218c2a88b5d554575b9de6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-535119",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-535119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-535119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-535119",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-535119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d2674fbdba086459517dc68b44955fb1fb88643c4b55f175f960d6e7fec5032",
	            "SandboxKey": "/var/run/docker/netns/8d2674fbdba0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-535119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:07:15:ff:64:17",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "adb717c923a7eb081a40be81c8474558a336c362715ad5409671064c3146fad7",
	                    "EndpointID": "a1fb45f892bb846ab0d7f5f7b490d931d5be5fbf3ccb48454c7ad8fca3040a3e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-535119",
	                        "709c1dd68365"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
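The inspect output above shows every exposed container port published on 127.0.0.1 with an ephemeral host port (8444/tcp, the cluster's non-default apiserver port, maps to 33221 here). A hedged sketch of pulling one such mapping out of `docker inspect` JSON, using only the NetworkSettings.Ports fields visible above, follows; the file name and output format are illustrative, not minikube's helper.

	// port_lookup.go: read the host port Docker published for the container's 8444/tcp,
	// based on the NetworkSettings.Ports structure shown in the inspect output above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-535119").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("no such container")
		}
		bindings := containers[0].NetworkSettings.Ports["8444/tcp"]
		if len(bindings) == 0 {
			log.Fatal("8444/tcp is not published")
		}
		fmt.Printf("apiserver reachable at %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}

minikube's own tooling does the equivalent with a Go template, as the later cli_runner line querying `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort` in these logs shows.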
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119: exit status 2 (361.298829ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-535119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-535119 logs -n 25: (3.205635374s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-456743 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat docker --no-pager                                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/docker/daemon.json                                                                                        │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo docker system info                                                                                                 │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cri-dockerd --version                                                                                              │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat containerd --no-pager                                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/containerd/config.toml                                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo containerd config dump                                                                                             │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl cat crio --no-pager                                                                                      │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo crio config                                                                                                        │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ delete  │ -p auto-456743                                                                                                                         │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-535119 image list --format=json                                                                                  │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-535119 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ start   │ -p calico-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-456743                │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:22:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:22:39.426748  801153 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:22:39.427130  801153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:39.427140  801153 out.go:374] Setting ErrFile to fd 2...
	I1101 10:22:39.427148  801153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:39.427502  801153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:22:39.428380  801153 out.go:368] Setting JSON to false
	I1101 10:22:39.430093  801153 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11096,"bootTime":1761981463,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:22:39.430238  801153 start.go:143] virtualization: kvm guest
	I1101 10:22:39.432643  801153 out.go:179] * [calico-456743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:22:39.433991  801153 notify.go:221] Checking for updates...
	I1101 10:22:39.434023  801153 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:22:39.435975  801153 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:22:39.437039  801153 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:22:39.438076  801153 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:22:39.439072  801153 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:22:39.440096  801153 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:22:39.442309  801153 config.go:182] Loaded profile config "default-k8s-diff-port-535119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:39.442449  801153 config.go:182] Loaded profile config "embed-certs-678014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:39.442571  801153 config.go:182] Loaded profile config "kindnet-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:39.442718  801153 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:22:39.472034  801153 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:22:39.472143  801153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:39.546867  801153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:22:39.534589199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:39.546978  801153 docker.go:319] overlay module found
	I1101 10:22:39.548335  801153 out.go:179] * Using the docker driver based on user configuration
	I1101 10:22:39.549226  801153 start.go:309] selected driver: docker
	I1101 10:22:39.549246  801153 start.go:930] validating driver "docker" against <nil>
	I1101 10:22:39.549261  801153 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:22:39.549809  801153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:39.618417  801153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:22:39.606236085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:39.618726  801153 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:22:39.619053  801153 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:22:39.620465  801153 out.go:179] * Using Docker driver with root privileges
	I1101 10:22:39.621492  801153 cni.go:84] Creating CNI manager for "calico"
	I1101 10:22:39.621517  801153 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1101 10:22:39.621619  801153 start.go:353] cluster config:
	{Name:calico-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:22:39.623223  801153 out.go:179] * Starting "calico-456743" primary control-plane node in "calico-456743" cluster
	I1101 10:22:39.624210  801153 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:22:39.625150  801153 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:22:39.626015  801153 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:22:39.626071  801153 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:22:39.626089  801153 cache.go:59] Caching tarball of preloaded images
	I1101 10:22:39.626108  801153 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:22:39.626249  801153 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:22:39.626263  801153 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:22:39.626400  801153 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/config.json ...
	I1101 10:22:39.626431  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/config.json: {Name:mk90135cbb56ada87dd0b110a6a847dd8a879021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:39.650437  801153 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:22:39.650463  801153 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:22:39.650486  801153 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:22:39.650535  801153 start.go:360] acquireMachinesLock for calico-456743: {Name:mk47d56ccd1a80cdcd4a1e14702b1203b633ff91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:22:39.650665  801153 start.go:364] duration metric: took 102.257µs to acquireMachinesLock for "calico-456743"
	I1101 10:22:39.650699  801153 start.go:93] Provisioning new machine with config: &{Name:calico-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:22:39.650785  801153 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:22:38.587657  793145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:22:39.087824  793145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:22:39.588277  793145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:22:39.673505  793145 kubeadm.go:1114] duration metric: took 4.182845298s to wait for elevateKubeSystemPrivileges
	I1101 10:22:39.673541  793145 kubeadm.go:403] duration metric: took 16.176743829s to StartCluster
	I1101 10:22:39.673566  793145 settings.go:142] acquiring lock: {Name:mkbef7883fa7ea2e62392c650cc73e4ea6f7318c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:39.673634  793145 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:22:39.676184  793145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/kubeconfig: {Name:mkb31f39102e998872d10e093b6e905c3e5b495f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:39.676524  793145 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:22:39.677729  793145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:22:39.677953  793145 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:22:39.678036  793145 config.go:182] Loaded profile config "kindnet-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:39.678070  793145 addons.go:70] Setting storage-provisioner=true in profile "kindnet-456743"
	I1101 10:22:39.678082  793145 addons.go:70] Setting default-storageclass=true in profile "kindnet-456743"
	I1101 10:22:39.678094  793145 addons.go:239] Setting addon storage-provisioner=true in "kindnet-456743"
	I1101 10:22:39.678096  793145 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-456743"
	I1101 10:22:39.678129  793145 host.go:66] Checking if "kindnet-456743" exists ...
	I1101 10:22:39.678482  793145 cli_runner.go:164] Run: docker container inspect kindnet-456743 --format={{.State.Status}}
	I1101 10:22:39.678792  793145 cli_runner.go:164] Run: docker container inspect kindnet-456743 --format={{.State.Status}}
	I1101 10:22:39.678890  793145 out.go:179] * Verifying Kubernetes components...
	I1101 10:22:39.679630  793145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:22:39.706731  793145 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:22:39.707765  793145 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:22:39.707791  793145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:22:39.707913  793145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-456743
	I1101 10:22:39.713094  793145 addons.go:239] Setting addon default-storageclass=true in "kindnet-456743"
	I1101 10:22:39.713149  793145 host.go:66] Checking if "kindnet-456743" exists ...
	I1101 10:22:39.713653  793145 cli_runner.go:164] Run: docker container inspect kindnet-456743 --format={{.State.Status}}
	I1101 10:22:39.769953  793145 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:22:39.770737  793145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:22:39.771341  793145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-456743
	I1101 10:22:39.772784  793145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/kindnet-456743/id_rsa Username:docker}
	I1101 10:22:39.805779  793145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/kindnet-456743/id_rsa Username:docker}
	I1101 10:22:39.832919  793145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:22:39.881721  793145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:22:39.921327  793145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:22:39.949916  793145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:22:40.101621  793145 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1101 10:22:40.103063  793145 node_ready.go:35] waiting up to 15m0s for node "kindnet-456743" to be "Ready" ...
	I1101 10:22:40.360674  793145 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1101 10:22:38.444709  788767 pod_ready.go:104] pod "coredns-66bc5c9577-vlf7q" is not "Ready", error: <nil>
	I1101 10:22:39.949815  788767 pod_ready.go:94] pod "coredns-66bc5c9577-vlf7q" is "Ready"
	I1101 10:22:39.949973  788767 pod_ready.go:86] duration metric: took 35.01204927s for pod "coredns-66bc5c9577-vlf7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:39.954683  788767 pod_ready.go:83] waiting for pod "etcd-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:39.961697  788767 pod_ready.go:94] pod "etcd-embed-certs-678014" is "Ready"
	I1101 10:22:39.961734  788767 pod_ready.go:86] duration metric: took 6.975469ms for pod "etcd-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:39.965862  788767 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:39.973656  788767 pod_ready.go:94] pod "kube-apiserver-embed-certs-678014" is "Ready"
	I1101 10:22:39.973687  788767 pod_ready.go:86] duration metric: took 7.788363ms for pod "kube-apiserver-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:39.976346  788767 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:40.142544  788767 pod_ready.go:94] pod "kube-controller-manager-embed-certs-678014" is "Ready"
	I1101 10:22:40.142593  788767 pod_ready.go:86] duration metric: took 166.218848ms for pod "kube-controller-manager-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:40.343410  788767 pod_ready.go:83] waiting for pod "kube-proxy-tlw2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:40.742336  788767 pod_ready.go:94] pod "kube-proxy-tlw2d" is "Ready"
	I1101 10:22:40.742362  788767 pod_ready.go:86] duration metric: took 398.921863ms for pod "kube-proxy-tlw2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:40.941924  788767 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:41.342179  788767 pod_ready.go:94] pod "kube-scheduler-embed-certs-678014" is "Ready"
	I1101 10:22:41.342205  788767 pod_ready.go:86] duration metric: took 400.25097ms for pod "kube-scheduler-embed-certs-678014" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:41.342216  788767 pod_ready.go:40] duration metric: took 36.408457252s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:22:41.406203  788767 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:22:41.411519  788767 out.go:179] * Done! kubectl is now configured to use "embed-certs-678014" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:22:03 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:03.357996658Z" level=info msg="Started container" PID=1748 containerID=5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper id=f883c8cd-f37c-4c93-aaf9-da64d926b4a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3964e2310a4737c1fca666a42d7c6db7926ea1d8e42129e0f090f0a1c14adb3c
	Nov 01 10:22:03 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:03.952810229Z" level=info msg="Removing container: 3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9" id=94a1364c-1a60-46cb-8f10-be800238a702 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:03 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:03.970998337Z" level=info msg="Removed container 3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper" id=94a1364c-1a60-46cb-8f10-be800238a702 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:17 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:17.995576926Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ce73a9c-8bd6-47bf-bea5-1aa2476ce6ea name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:17 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:17.996633058Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61a94c91-d874-4069-9d5a-9caa027d0fdb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:17 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:17.997829947Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2e31ce1f-4682-4f5d-b519-dbf295aed339 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:17 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:17.998040858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.003948955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.004163602Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/28f9c884743b4b73e2aa0e8db09eb652b032e7b3665e6d54fc283539f019de78/merged/etc/passwd: no such file or directory"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.00420543Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/28f9c884743b4b73e2aa0e8db09eb652b032e7b3665e6d54fc283539f019de78/merged/etc/group: no such file or directory"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.004542142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.033588573Z" level=info msg="Created container 598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995: kube-system/storage-provisioner/storage-provisioner" id=2e31ce1f-4682-4f5d-b519-dbf295aed339 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.034389704Z" level=info msg="Starting container: 598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995" id=22100c98-b57a-43d0-91f9-b4478617f3f8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:22:18 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:18.036667347Z" level=info msg="Started container" PID=1762 containerID=598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995 description=kube-system/storage-provisioner/storage-provisioner id=22100c98-b57a-43d0-91f9-b4478617f3f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15fe6fdd58f947a6f6f2060bdedb454d57e35e568b215902a8399436e6a07229
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.825237072Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2d9adc30-bc64-469b-8dad-daaa5bc75829 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.826353289Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e6b47d22-5743-4dfd-b566-9547572bcc97 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.827371326Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper" id=ac4d9184-c4c5-4426-9a1c-0e0c23ed8099 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.827523318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.834396106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.835089017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.872818183Z" level=info msg="Created container 26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper" id=ac4d9184-c4c5-4426-9a1c-0e0c23ed8099 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.873659778Z" level=info msg="Starting container: 26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21" id=93a22018-e72b-423c-acd9-6b4448f1027c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:22:26 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:26.87646548Z" level=info msg="Started container" PID=1797 containerID=26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper id=93a22018-e72b-423c-acd9-6b4448f1027c name=/runtime.v1.RuntimeService/StartContainer sandboxID=3964e2310a4737c1fca666a42d7c6db7926ea1d8e42129e0f090f0a1c14adb3c
	Nov 01 10:22:27 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:27.024180307Z" level=info msg="Removing container: 5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d" id=97eb0b4a-e36a-4384-a4e6-2264ce5c49ff name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:27 default-k8s-diff-port-535119 crio[562]: time="2025-11-01T10:22:27.039983698Z" level=info msg="Removed container 5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7/dashboard-metrics-scraper" id=97eb0b4a-e36a-4384-a4e6-2264ce5c49ff name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	26092099342bb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   3                   3964e2310a473       dashboard-metrics-scraper-6ffb444bf9-plkm7             kubernetes-dashboard
	598929db993d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   15fe6fdd58f94       storage-provisioner                                    kube-system
	367e9352dc1b9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   a4cb280f04767       kubernetes-dashboard-855c9754f9-mgn6f                  kubernetes-dashboard
	5f74118a7dcc1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   a1c3396c5fb60       busybox                                                default
	f0858d36c6624       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   a0d4bd0be9084       coredns-66bc5c9577-c4s2q                               kube-system
	63992ee9b84cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   15fe6fdd58f94       storage-provisioner                                    kube-system
	4d191bebc00a7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   1ce1f924eda8a       kube-proxy-6tl8q                                       kube-system
	5e268959350f6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   af75297061f94       kindnet-fvr2t                                          kube-system
	ca9bddec19806       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   c758509cf8c30       etcd-default-k8s-diff-port-535119                      kube-system
	48823e7d320e5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   7404b45af1ea5       kube-scheduler-default-k8s-diff-port-535119            kube-system
	d8dbc23691e83       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   1846f03df77ea       kube-controller-manager-default-k8s-diff-port-535119   kube-system
	789a7612b3f36       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   1b7bf1623769e       kube-apiserver-default-k8s-diff-port-535119            kube-system
	
	
	==> coredns [f0858d36c66240ef67cdb11f52c1eeb8c54ef043dfd7a32c19597f2b57e5280e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48596 - 44903 "HINFO IN 6803641912873057387.3869622318025249585. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032000292s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-535119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-535119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=default-k8s-diff-port-535119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-535119
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:22:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:22:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:22:16 +0000   Sat, 01 Nov 2025 10:20:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:22:16 +0000   Sat, 01 Nov 2025 10:21:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-535119
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a6fa098f-22f7-43f7-a2bd-0a700ca3d7aa
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-c4s2q                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-535119                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-fvr2t                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-535119             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-535119    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-6tl8q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-535119             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-plkm7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mgn6f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s               node-controller  Node default-k8s-diff-port-535119 event: Registered Node default-k8s-diff-port-535119 in Controller
	  Normal  NodeReady                100s               kubelet          Node default-k8s-diff-port-535119 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-535119 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-535119 event: Registered Node default-k8s-diff-port-535119 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [ca9bddec198066b48591adadfc97f2cc7f80b78bc9f559075ca0db64b5aea9f8] <==
	{"level":"warn","ts":"2025-11-01T10:21:44.983924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:44.992024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:44.999316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.007968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.015370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.025594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.034268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.042068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.049779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.058644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.065410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.073235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.081920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.089350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.098339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.105951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.112820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.126266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.133199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.141974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.163443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.174752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.184028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:21:45.245909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41630","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:22:13.441310Z","caller":"traceutil/trace.go:172","msg":"trace[408998277] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"132.778794ms","start":"2025-11-01T10:22:13.308495Z","end":"2025-11-01T10:22:13.441274Z","steps":["trace[408998277] 'process raft request'  (duration: 130.426475ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:22:43 up  3:04,  0 user,  load average: 4.01, 3.77, 2.99
	Linux default-k8s-diff-port-535119 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e268959350f62859e2f83296fa5f9105de7d5d5542f85a320b57d5161ea134a] <==
	I1101 10:21:47.372203       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:21:47.372483       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:21:47.372653       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:21:47.372672       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:21:47.372698       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:21:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:21:47.668954       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:21:47.669029       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:21:47.669046       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:21:47.669230       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:21:47.969674       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:21:47.969848       1 metrics.go:72] Registering metrics
	I1101 10:21:47.969969       1 controller.go:711] "Syncing nftables rules"
	I1101 10:21:57.576884       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:21:57.576958       1 main.go:301] handling current node
	I1101 10:22:07.579233       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:22:07.579274       1 main.go:301] handling current node
	I1101 10:22:17.577062       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:22:17.577107       1 main.go:301] handling current node
	I1101 10:22:27.577002       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:22:27.577050       1 main.go:301] handling current node
	I1101 10:22:37.579978       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:22:37.580033       1 main.go:301] handling current node
	
	
	==> kube-apiserver [789a7612b3f36c70178a0d6094a84b47c8beecea55e7d6586762c03604f5c2ad] <==
	I1101 10:21:45.908637       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:21:45.908725       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:21:45.908756       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:21:45.908781       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:21:45.905898       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:21:45.909442       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:21:45.916012       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:21:45.916165       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:21:45.924147       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:21:45.938054       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:21:45.940026       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:21:45.948473       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:21:45.964633       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:21:46.333746       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:21:46.373739       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:21:46.403861       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:21:46.421338       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:21:46.437083       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:21:46.498364       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.204.13"}
	I1101 10:21:46.512170       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.173.79"}
	I1101 10:21:46.801694       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:21:49.376371       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:21:49.726017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:21:49.726017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:21:49.776615       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d8dbc23691e83e3bbb62dac3c1cc0308bcfb69a5f2729f19473bfa3c56e8bea3] <==
	I1101 10:21:49.222073       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:21:49.222356       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:21:49.222525       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:21:49.224130       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:21:49.224345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:21:49.225431       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:21:49.225552       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:21:49.227908       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:21:49.229198       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:21:49.229252       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:21:49.229284       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:21:49.229292       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:21:49.229297       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:21:49.230308       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:21:49.231496       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:21:49.233647       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:21:49.233681       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:21:49.233776       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:21:49.234944       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:21:49.238159       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:21:49.240391       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:21:49.245575       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:21:49.245597       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:21:49.245605       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:21:49.249125       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4d191bebc00a77146f23a5ffeba9c52ca18645185d9f87fdeb8cedc3bfe48be8] <==
	I1101 10:21:47.233674       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:21:47.302799       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:21:47.403422       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:21:47.403461       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:21:47.403569       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:21:47.423184       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:21:47.423257       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:21:47.429617       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:21:47.430096       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:21:47.430138       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:47.431714       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:21:47.431795       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:21:47.431827       1 config.go:309] "Starting node config controller"
	I1101 10:21:47.431830       1 config.go:200] "Starting service config controller"
	I1101 10:21:47.431852       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:21:47.431829       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:21:47.431865       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:21:47.431861       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:21:47.431871       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:21:47.532231       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:21:47.532257       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:21:47.532296       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [48823e7d320e569e3fa912f75beb7c80e3cfa0240efaf9018fddf16a5c86137b] <==
	I1101 10:21:44.253667       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:21:45.818166       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:21:45.818201       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:21:45.818213       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:21:45.818222       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:21:45.926485       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:21:45.926583       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:21:45.930246       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:21:45.930494       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:21:45.930401       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:21:45.930555       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:21:46.031665       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:21:52 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:52.889021     717 scope.go:117] "RemoveContainer" containerID="beecce84e8bf53b7edd5808620bc33297cd1f8853907f494b92cac8f0f2800ce"
	Nov 01 10:21:53 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:53.505383     717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:21:53 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:53.899001     717 scope.go:117] "RemoveContainer" containerID="beecce84e8bf53b7edd5808620bc33297cd1f8853907f494b92cac8f0f2800ce"
	Nov 01 10:21:53 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:53.899334     717 scope.go:117] "RemoveContainer" containerID="3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9"
	Nov 01 10:21:53 default-k8s-diff-port-535119 kubelet[717]: E1101 10:21:53.899506     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:21:54 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:54.908795     717 scope.go:117] "RemoveContainer" containerID="3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9"
	Nov 01 10:21:54 default-k8s-diff-port-535119 kubelet[717]: E1101 10:21:54.908983     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:21:57 default-k8s-diff-port-535119 kubelet[717]: I1101 10:21:57.929691     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mgn6f" podStartSLOduration=2.060232897 podStartE2EDuration="8.929669954s" podCreationTimestamp="2025-11-01 10:21:49 +0000 UTC" firstStartedPulling="2025-11-01 10:21:50.180324749 +0000 UTC m=+7.461441256" lastFinishedPulling="2025-11-01 10:21:57.049761789 +0000 UTC m=+14.330878313" observedRunningTime="2025-11-01 10:21:57.929556078 +0000 UTC m=+15.210672587" watchObservedRunningTime="2025-11-01 10:21:57.929669954 +0000 UTC m=+15.210786490"
	Nov 01 10:22:03 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:03.301361     717 scope.go:117] "RemoveContainer" containerID="3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9"
	Nov 01 10:22:03 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:03.949733     717 scope.go:117] "RemoveContainer" containerID="3cadffecceef447454f37bfe36e6f4adb00b5928fe1dd782f33fa78cde9e8dc9"
	Nov 01 10:22:03 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:03.950212     717 scope.go:117] "RemoveContainer" containerID="5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d"
	Nov 01 10:22:03 default-k8s-diff-port-535119 kubelet[717]: E1101 10:22:03.950445     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:22:13 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:13.300940     717 scope.go:117] "RemoveContainer" containerID="5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d"
	Nov 01 10:22:13 default-k8s-diff-port-535119 kubelet[717]: E1101 10:22:13.301238     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:22:17 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:17.995186     717 scope.go:117] "RemoveContainer" containerID="63992ee9b84cb33f02c4ed9eb2ca3b69146006e8c4eda10c81fd8d5c3fd45734"
	Nov 01 10:22:26 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:26.824682     717 scope.go:117] "RemoveContainer" containerID="5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d"
	Nov 01 10:22:27 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:27.022603     717 scope.go:117] "RemoveContainer" containerID="5f47118560ede9e2e6893246516556e2d4ab0551b81b169bfe4387be41a44e9d"
	Nov 01 10:22:27 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:27.022961     717 scope.go:117] "RemoveContainer" containerID="26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21"
	Nov 01 10:22:27 default-k8s-diff-port-535119 kubelet[717]: E1101 10:22:27.023335     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:22:33 default-k8s-diff-port-535119 kubelet[717]: I1101 10:22:33.301237     717 scope.go:117] "RemoveContainer" containerID="26092099342bb284f6971dbcc78f3e24555d760f966d317a94823c9b2a3f1c21"
	Nov 01 10:22:33 default-k8s-diff-port-535119 kubelet[717]: E1101 10:22:33.301484     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-plkm7_kubernetes-dashboard(e860abd1-45c7-4e43-bb3c-6fff8cf44334)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-plkm7" podUID="e860abd1-45c7-4e43-bb3c-6fff8cf44334"
	Nov 01 10:22:37 default-k8s-diff-port-535119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:22:37 default-k8s-diff-port-535119 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:22:37 default-k8s-diff-port-535119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:22:37 default-k8s-diff-port-535119 systemd[1]: kubelet.service: Consumed 1.940s CPU time.
	
	
	==> kubernetes-dashboard [367e9352dc1b956483b3e41c8f68f7c436e82a3befeac0422a3111dc25ea1263] <==
	2025/11/01 10:21:57 Using namespace: kubernetes-dashboard
	2025/11/01 10:21:57 Using in-cluster config to connect to apiserver
	2025/11/01 10:21:57 Using secret token for csrf signing
	2025/11/01 10:21:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:21:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:21:57 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:21:57 Generating JWE encryption key
	2025/11/01 10:21:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:21:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:21:57 Initializing JWE encryption key from synchronized object
	2025/11/01 10:21:57 Creating in-cluster Sidecar client
	2025/11/01 10:21:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:21:57 Serving insecurely on HTTP port: 9090
	2025/11/01 10:22:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:21:57 Starting overwatch
	
	
	==> storage-provisioner [598929db993d5341c4bb379640a12b18a006b9760ad6912747a9d78467eab995] <==
	I1101 10:22:18.050475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:22:18.058767       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:22:18.058815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:22:18.061373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:21.516390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:25.777515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:29.376576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:32.430744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:35.453232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:35.459873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:22:35.460062       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:22:35.460233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac9104e3-50b9-4617-bb83-1ca4dc037b6d", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-535119_4febf099-037f-4a5e-8bf2-a80040994b27 became leader
	I1101 10:22:35.460284       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-535119_4febf099-037f-4a5e-8bf2-a80040994b27!
	W1101 10:22:35.462801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:35.466828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:22:35.561161       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-535119_4febf099-037f-4a5e-8bf2-a80040994b27!
	W1101 10:22:37.470923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:37.474982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:39.478771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:39.483883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:41.487297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:41.492080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:43.495559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:43.602363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [63992ee9b84cb33f02c4ed9eb2ca3b69146006e8c4eda10c81fd8d5c3fd45734] <==
	I1101 10:21:47.201672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:22:17.206780       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119: exit status 2 (381.549408ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-535119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.88s)
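For reference, the two post-mortem checks that helpers_test.go runs above can be repeated by hand against the same profile. This is a minimal sketch, assuming the default-k8s-diff-port-535119 cluster from this run still exists and the built minikube binary and kubectl are on the PATH; both commands are copied verbatim from the log above:

	# minikube's view of the apiserver (the run above printed "Running" but exited with status 2)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119

	# list pods in any namespace whose phase is not Running
	kubectl --context default-k8s-diff-port-535119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running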

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-678014 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-678014 --alsologtostderr -v=1: exit status 80 (2.444074065s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-678014 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:22:53.252450  805929 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:22:53.252805  805929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:53.252816  805929 out.go:374] Setting ErrFile to fd 2...
	I1101 10:22:53.252821  805929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:53.253114  805929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:22:53.253452  805929 out.go:368] Setting JSON to false
	I1101 10:22:53.253510  805929 mustload.go:66] Loading cluster: embed-certs-678014
	I1101 10:22:53.253905  805929 config.go:182] Loaded profile config "embed-certs-678014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:53.254332  805929 cli_runner.go:164] Run: docker container inspect embed-certs-678014 --format={{.State.Status}}
	I1101 10:22:53.273051  805929 host.go:66] Checking if "embed-certs-678014" exists ...
	I1101 10:22:53.273437  805929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:53.340334  805929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-01 10:22:53.329256902 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:53.341002  805929 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-678014 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:22:53.344543  805929 out.go:179] * Pausing node embed-certs-678014 ... 
	I1101 10:22:53.345608  805929 host.go:66] Checking if "embed-certs-678014" exists ...
	I1101 10:22:53.345927  805929 ssh_runner.go:195] Run: systemctl --version
	I1101 10:22:53.345975  805929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-678014
	I1101 10:22:53.364041  805929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/embed-certs-678014/id_rsa Username:docker}
	I1101 10:22:53.471970  805929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:53.487341  805929 pause.go:52] kubelet running: true
	I1101 10:22:53.487406  805929 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:22:53.658919  805929 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:22:53.659032  805929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:22:53.737424  805929 cri.go:89] found id: "5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed"
	I1101 10:22:53.737457  805929 cri.go:89] found id: "e9c85103604609c36cfb00de71bfe70f095051d470ae83fe1db5422a8554bc65"
	I1101 10:22:53.737462  805929 cri.go:89] found id: "090328e2d66c9eab8a50d6179bde736e4e3c793c38917b3d82a09df65c4b1ee2"
	I1101 10:22:53.737492  805929 cri.go:89] found id: "7b3d50aff91266580f138509e805b375d6b764cfe7138fdc0bb1b3780d21f7e0"
	I1101 10:22:53.737497  805929 cri.go:89] found id: "901ec54f9139c34f1066587c7237ab3984a2c279347d55a1d0b038574bbca217"
	I1101 10:22:53.737501  805929 cri.go:89] found id: "77c8dcd2cdbb15ad48c01e45cd25792e208735c6eda9f44bc1fa9ab853e0081c"
	I1101 10:22:53.737505  805929 cri.go:89] found id: "9882b066954b83924bdc61795f906efe75b16f0dcdb7b9d8bce879789c8743e3"
	I1101 10:22:53.737509  805929 cri.go:89] found id: "bb7743b9e3f295728cb34054b001eac220d6549f08d9f5e304789213cc644bae"
	I1101 10:22:53.737513  805929 cri.go:89] found id: "a4e56bd25efad002d1eb660d328f3fda9e93ba58bb33f2e388635b902755f1e9"
	I1101 10:22:53.737521  805929 cri.go:89] found id: "9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629"
	I1101 10:22:53.737525  805929 cri.go:89] found id: "6ff08c3f9890015052d9adbb802e29cfd38776e9a69671ca1aacbe3ea7955d0a"
	I1101 10:22:53.737529  805929 cri.go:89] found id: ""
	I1101 10:22:53.737578  805929 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:22:53.752970  805929 retry.go:31] will retry after 242.788734ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:53Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:22:53.996565  805929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:54.010493  805929 pause.go:52] kubelet running: false
	I1101 10:22:54.010556  805929 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:22:54.151861  805929 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:22:54.151962  805929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:22:54.231253  805929 cri.go:89] found id: "5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed"
	I1101 10:22:54.231280  805929 cri.go:89] found id: "e9c85103604609c36cfb00de71bfe70f095051d470ae83fe1db5422a8554bc65"
	I1101 10:22:54.231286  805929 cri.go:89] found id: "090328e2d66c9eab8a50d6179bde736e4e3c793c38917b3d82a09df65c4b1ee2"
	I1101 10:22:54.231291  805929 cri.go:89] found id: "7b3d50aff91266580f138509e805b375d6b764cfe7138fdc0bb1b3780d21f7e0"
	I1101 10:22:54.231296  805929 cri.go:89] found id: "901ec54f9139c34f1066587c7237ab3984a2c279347d55a1d0b038574bbca217"
	I1101 10:22:54.231303  805929 cri.go:89] found id: "77c8dcd2cdbb15ad48c01e45cd25792e208735c6eda9f44bc1fa9ab853e0081c"
	I1101 10:22:54.231308  805929 cri.go:89] found id: "9882b066954b83924bdc61795f906efe75b16f0dcdb7b9d8bce879789c8743e3"
	I1101 10:22:54.231312  805929 cri.go:89] found id: "bb7743b9e3f295728cb34054b001eac220d6549f08d9f5e304789213cc644bae"
	I1101 10:22:54.231316  805929 cri.go:89] found id: "a4e56bd25efad002d1eb660d328f3fda9e93ba58bb33f2e388635b902755f1e9"
	I1101 10:22:54.231329  805929 cri.go:89] found id: "9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629"
	I1101 10:22:54.231335  805929 cri.go:89] found id: "6ff08c3f9890015052d9adbb802e29cfd38776e9a69671ca1aacbe3ea7955d0a"
	I1101 10:22:54.231338  805929 cri.go:89] found id: ""
	I1101 10:22:54.231377  805929 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:22:54.243827  805929 retry.go:31] will retry after 411.96998ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:54Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:22:54.656372  805929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:54.688955  805929 pause.go:52] kubelet running: false
	I1101 10:22:54.689030  805929 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:22:54.833107  805929 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:22:54.833202  805929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:22:54.908385  805929 cri.go:89] found id: "5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed"
	I1101 10:22:54.908413  805929 cri.go:89] found id: "e9c85103604609c36cfb00de71bfe70f095051d470ae83fe1db5422a8554bc65"
	I1101 10:22:54.908418  805929 cri.go:89] found id: "090328e2d66c9eab8a50d6179bde736e4e3c793c38917b3d82a09df65c4b1ee2"
	I1101 10:22:54.908423  805929 cri.go:89] found id: "7b3d50aff91266580f138509e805b375d6b764cfe7138fdc0bb1b3780d21f7e0"
	I1101 10:22:54.908427  805929 cri.go:89] found id: "901ec54f9139c34f1066587c7237ab3984a2c279347d55a1d0b038574bbca217"
	I1101 10:22:54.908431  805929 cri.go:89] found id: "77c8dcd2cdbb15ad48c01e45cd25792e208735c6eda9f44bc1fa9ab853e0081c"
	I1101 10:22:54.908435  805929 cri.go:89] found id: "9882b066954b83924bdc61795f906efe75b16f0dcdb7b9d8bce879789c8743e3"
	I1101 10:22:54.908438  805929 cri.go:89] found id: "bb7743b9e3f295728cb34054b001eac220d6549f08d9f5e304789213cc644bae"
	I1101 10:22:54.908442  805929 cri.go:89] found id: "a4e56bd25efad002d1eb660d328f3fda9e93ba58bb33f2e388635b902755f1e9"
	I1101 10:22:54.908451  805929 cri.go:89] found id: "9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629"
	I1101 10:22:54.908455  805929 cri.go:89] found id: "6ff08c3f9890015052d9adbb802e29cfd38776e9a69671ca1aacbe3ea7955d0a"
	I1101 10:22:54.908458  805929 cri.go:89] found id: ""
	I1101 10:22:54.908518  805929 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:22:54.922395  805929 retry.go:31] will retry after 292.45648ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:54Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:22:55.215764  805929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:55.229826  805929 pause.go:52] kubelet running: false
	I1101 10:22:55.229940  805929 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:22:55.381879  805929 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:22:55.381969  805929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:22:55.457314  805929 cri.go:89] found id: "5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed"
	I1101 10:22:55.457338  805929 cri.go:89] found id: "e9c85103604609c36cfb00de71bfe70f095051d470ae83fe1db5422a8554bc65"
	I1101 10:22:55.457344  805929 cri.go:89] found id: "090328e2d66c9eab8a50d6179bde736e4e3c793c38917b3d82a09df65c4b1ee2"
	I1101 10:22:55.457349  805929 cri.go:89] found id: "7b3d50aff91266580f138509e805b375d6b764cfe7138fdc0bb1b3780d21f7e0"
	I1101 10:22:55.457353  805929 cri.go:89] found id: "901ec54f9139c34f1066587c7237ab3984a2c279347d55a1d0b038574bbca217"
	I1101 10:22:55.457358  805929 cri.go:89] found id: "77c8dcd2cdbb15ad48c01e45cd25792e208735c6eda9f44bc1fa9ab853e0081c"
	I1101 10:22:55.457361  805929 cri.go:89] found id: "9882b066954b83924bdc61795f906efe75b16f0dcdb7b9d8bce879789c8743e3"
	I1101 10:22:55.457363  805929 cri.go:89] found id: "bb7743b9e3f295728cb34054b001eac220d6549f08d9f5e304789213cc644bae"
	I1101 10:22:55.457367  805929 cri.go:89] found id: "a4e56bd25efad002d1eb660d328f3fda9e93ba58bb33f2e388635b902755f1e9"
	I1101 10:22:55.457373  805929 cri.go:89] found id: "9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629"
	I1101 10:22:55.457375  805929 cri.go:89] found id: "6ff08c3f9890015052d9adbb802e29cfd38776e9a69671ca1aacbe3ea7955d0a"
	I1101 10:22:55.457378  805929 cri.go:89] found id: ""
	I1101 10:22:55.457415  805929 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:22:55.566525  805929 out.go:203] 
	W1101 10:22:55.607137  805929 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:22:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:22:55.607171  805929 out.go:285] * 
	* 
	W1101 10:22:55.613504  805929 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:22:55.616940  805929 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-678014 --alsologtostderr -v=1 failed: exit status 80
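The exit status 80 above traces back to the repeated `sudo runc list -f json` probe failing with `open /run/runc: no such file or directory` after the kubelet was stopped. The sketch below is illustrative only (not part of the test suite) and simply mirrors the two shell probes the pause path runs over SSH in the log; it assumes it is executed directly on the node as root with `crictl` and `runc` on PATH, and makes no claim about why the runc state root is absent on this cri-o node.

```go
// pause_probe.go - minimal sketch of the two probes seen in the pause log:
// the crictl namespace listing and the "runc list -f json" call that exits 1.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output plus any error,
// roughly the way ssh_runner surfaces results in the log above.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\nerr=%v\n%s\n", name, args, err, out)
}

func main() {
	// Same label filter the log uses for kube-system containers.
	run("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	// The call that fails above with "open /run/runc: no such file or directory".
	run("runc", "list", "-f", "json")
}
```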
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-678014
helpers_test.go:243: (dbg) docker inspect embed-certs-678014:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8",
	        "Created": "2025-11-01T10:20:19.10525333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788972,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:21:53.744994685Z",
	            "FinishedAt": "2025-11-01T10:21:52.426238128Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/hosts",
	        "LogPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8-json.log",
	        "Name": "/embed-certs-678014",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-678014:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-678014",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8",
	                "LowerDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-678014",
	                "Source": "/var/lib/docker/volumes/embed-certs-678014/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-678014",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-678014",
	                "name.minikube.sigs.k8s.io": "embed-certs-678014",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e09db449d55eda9bc07fb94c95a156f29886cef12615c4350c91812dfcf0fc37",
	            "SandboxKey": "/var/run/docker/netns/e09db449d55e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-678014": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:8c:82:3f:e7:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "59c3492c15198878d11d0583248059a9226a90667cc7e5ff7108cce34fc74e86",
	                    "EndpointID": "d9417974da7ee44f4160f0a6771ed01e11add10998f9d7ad123fbd1b006ad337",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-678014",
	                        "7254f01179da"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
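For reference, the earlier cli_runner template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` resolves against this inspect output to host port 33223, the port the sshutil client connected to. The following is a minimal, illustrative Go sketch of that same lookup over a trimmed copy of the JSON above; it is not code from the test suite.

```go
// port_lookup.go - sketch of the Ports["22/tcp"][0].HostPort lookup.
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed copy of the relevant part of the docker inspect output above.
const inspectJSON = `{
  "NetworkSettings": {
    "Ports": {
      "22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33223"}]
    }
  }
}`

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var c inspect
	if err := json.Unmarshal([]byte(inspectJSON), &c); err != nil {
		panic(err)
	}
	// Equivalent of the Go template used by cli_runner.
	fmt.Println(c.NetworkSettings.Ports["22/tcp"][0].HostPort) // prints 33223
}
```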
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678014 -n embed-certs-678014
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678014 -n embed-certs-678014: exit status 2 (375.25201ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-678014 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-678014 logs -n 25: (1.471303819s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-456743 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo docker system info                                                                                                                             │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cri-dockerd --version                                                                                                                          │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo containerd config dump                                                                                                                         │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo crio config                                                                                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ delete  │ -p auto-456743                                                                                                                                                     │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-535119 image list --format=json                                                                                                              │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-535119 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ start   │ -p calico-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-456743                │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-535119                                                                                                                                    │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ delete  │ -p default-k8s-diff-port-535119                                                                                                                                    │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ start   │ -p custom-flannel-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-456743        │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ image   │ embed-certs-678014 image list --format=json                                                                                                                        │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-678014 --alsologtostderr -v=1                                                                                                                       │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:22:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:22:50.184215  805154 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:22:50.184501  805154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:50.184511  805154 out.go:374] Setting ErrFile to fd 2...
	I1101 10:22:50.184516  805154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:50.184737  805154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:22:50.185266  805154 out.go:368] Setting JSON to false
	I1101 10:22:50.186505  805154 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11107,"bootTime":1761981463,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:22:50.186610  805154 start.go:143] virtualization: kvm guest
	I1101 10:22:50.188359  805154 out.go:179] * [custom-flannel-456743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:22:50.189405  805154 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:22:50.189437  805154 notify.go:221] Checking for updates...
	I1101 10:22:50.191428  805154 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:22:50.192479  805154 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:22:50.193423  805154 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:22:50.194349  805154 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:22:50.195425  805154 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:22:50.197139  805154 config.go:182] Loaded profile config "calico-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:50.197288  805154 config.go:182] Loaded profile config "embed-certs-678014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:50.197426  805154 config.go:182] Loaded profile config "kindnet-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:50.197583  805154 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:22:50.224654  805154 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:22:50.224857  805154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:50.287554  805154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:22:50.27496342 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:50.287661  805154 docker.go:319] overlay module found
	I1101 10:22:50.289262  805154 out.go:179] * Using the docker driver based on user configuration
	I1101 10:22:50.290224  805154 start.go:309] selected driver: docker
	I1101 10:22:50.290240  805154 start.go:930] validating driver "docker" against <nil>
	I1101 10:22:50.290267  805154 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:22:50.290931  805154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:50.357584  805154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:22:50.345249025 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:50.357904  805154 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:22:50.358163  805154 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:22:50.359570  805154 out.go:179] * Using Docker driver with root privileges
	I1101 10:22:50.360566  805154 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1101 10:22:50.360597  805154 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1101 10:22:50.360678  805154 start.go:353] cluster config:
	{Name:custom-flannel-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:22:50.361891  805154 out.go:179] * Starting "custom-flannel-456743" primary control-plane node in "custom-flannel-456743" cluster
	I1101 10:22:50.362742  805154 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:22:50.363702  805154 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:22:50.364519  805154 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:22:50.364581  805154 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:22:50.364597  805154 cache.go:59] Caching tarball of preloaded images
	I1101 10:22:50.364619  805154 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:22:50.364735  805154 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:22:50.364752  805154 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:22:50.364936  805154 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/custom-flannel-456743/config.json ...
	I1101 10:22:50.364972  805154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/custom-flannel-456743/config.json: {Name:mk09a24e08f2b3815c861f8baa1a6832ec95b79a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.386898  805154 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:22:50.386925  805154 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:22:50.386944  805154 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:22:50.386973  805154 start.go:360] acquireMachinesLock for custom-flannel-456743: {Name:mk360259e43a462b6efc02b89ea4bcf9f3bf408f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:22:50.387087  805154 start.go:364] duration metric: took 96.375µs to acquireMachinesLock for "custom-flannel-456743"
	I1101 10:22:50.387116  805154 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-456743 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:22:50.387192  805154 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:22:49.584979  801153 cli_runner.go:164] Run: docker network inspect calico-456743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:22:49.603718  801153 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:22:49.608673  801153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:22:49.620395  801153 kubeadm.go:884] updating cluster {Name:calico-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:22:49.620591  801153 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:22:49.620665  801153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:22:49.660884  801153 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:22:49.660912  801153 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:22:49.660974  801153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:22:49.691085  801153 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:22:49.691110  801153 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:22:49.691119  801153 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:22:49.691211  801153 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-456743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1101 10:22:49.691282  801153 ssh_runner.go:195] Run: crio config
	I1101 10:22:49.742395  801153 cni.go:84] Creating CNI manager for "calico"
	I1101 10:22:49.742428  801153 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:22:49.742452  801153 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-456743 NodeName:calico-456743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:22:49.742601  801153 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-456743"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
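	[editor's note] The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new in the next few lines. A minimal sanity check, assuming the kubeadm binary this run keeps under /var/lib/minikube/binaries/v1.34.1, would be:
	    # validate the generated multi-document config without running init
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    # compare against the defaults kubeadm would otherwise fill in
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults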
	I1101 10:22:49.742670  801153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:22:49.752094  801153 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:22:49.752178  801153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:22:49.761579  801153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 10:22:49.776508  801153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:22:49.793411  801153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 10:22:49.808383  801153 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:22:49.812646  801153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:22:49.824296  801153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:22:49.916760  801153 ssh_runner.go:195] Run: sudo systemctl start kubelet
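	[editor's note] The /etc/hosts one-liner above follows the usual strip-then-append pattern: drop any stale control-plane.minikube.internal entry, echo the fresh mapping, and copy the temp file back over /etc/hosts. A quick confirmation on the node (illustrative only):
	    # resolve through the 'files' NSS source, i.e. /etc/hosts
	    getent hosts control-plane.minikube.internal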
	I1101 10:22:49.949786  801153 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743 for IP: 192.168.76.2
	I1101 10:22:49.949827  801153 certs.go:195] generating shared ca certs ...
	I1101 10:22:49.949885  801153 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:49.950073  801153 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:22:49.950112  801153 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:22:49.950125  801153 certs.go:257] generating profile certs ...
	I1101 10:22:49.950197  801153 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.key
	I1101 10:22:49.950215  801153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.crt with IP's: []
	I1101 10:22:50.186469  801153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.crt ...
	I1101 10:22:50.186497  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.crt: {Name:mkd1d16610a1f1f6545db441b228216545c8d2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.186681  801153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.key ...
	I1101 10:22:50.186696  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.key: {Name:mk08a247a28ae504750b07e5b5e9b2fc5bb68145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.186811  801153 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key.1972247e
	I1101 10:22:50.186828  801153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt.1972247e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:22:50.420860  801153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt.1972247e ...
	I1101 10:22:50.420901  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt.1972247e: {Name:mk4a5406f63293a9c25bd6b43e34fa140d5ba573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.421124  801153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key.1972247e ...
	I1101 10:22:50.421147  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key.1972247e: {Name:mk3ea9a0ec641cc46d0442498893701a7ff31d83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.421280  801153 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt.1972247e -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt
	I1101 10:22:50.421388  801153 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key.1972247e -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key
	I1101 10:22:50.421481  801153 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.key
	I1101 10:22:50.421513  801153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.crt with IP's: []
	I1101 10:22:50.523352  801153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.crt ...
	I1101 10:22:50.523385  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.crt: {Name:mkdabf4a4e14609c551b4cc6cbdd216d5c522d1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.523597  801153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.key ...
	I1101 10:22:50.523630  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.key: {Name:mkd575364c466831f95d7cc92f8b3bd08eca5781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.523907  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:22:50.523955  801153 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:22:50.523966  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:22:50.524000  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:22:50.524033  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:22:50.524061  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:22:50.524120  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:22:50.524784  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:22:50.546785  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:22:50.567691  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:22:50.588140  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:22:50.609335  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:22:50.629927  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:22:50.649813  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:22:50.670792  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:22:50.692637  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:22:50.715601  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:22:50.738553  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:22:50.761162  801153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:22:50.776952  801153 ssh_runner.go:195] Run: openssl version
	I1101 10:22:50.784941  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:22:50.796584  801153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:22:50.802949  801153 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:22:50.803022  801153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:22:50.845568  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:22:50.859764  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:22:50.871181  801153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:22:50.875919  801153 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:22:50.875991  801153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:22:50.915318  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:22:50.927095  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:22:50.938708  801153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:22:50.944682  801153 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:22:50.944758  801153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:22:50.984340  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
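	[editor's note] Each bundle copied above is linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is exactly what `openssl x509 -hash -noout` prints. A minimal sketch of the same idea for an arbitrary CA file, assuming a hypothetical my-ca.pem:
	    # compute the subject hash OpenSSL uses to look up trust anchors
	    hash=$(openssl x509 -hash -noout -in my-ca.pem)
	    # install the certificate and create the <hash>.0 link the TLS stack expects
	    sudo cp my-ca.pem /usr/share/ca-certificates/my-ca.pem
	    sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${hash}.0"
	    # verify OpenSSL can now find it by hash
	    openssl verify -CApath /etc/ssl/certs my-ca.pem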
	I1101 10:22:50.994679  801153 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:22:50.999221  801153 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:22:50.999288  801153 kubeadm.go:401] StartCluster: {Name:calico-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:22:50.999391  801153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:22:50.999464  801153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:22:51.034162  801153 cri.go:89] found id: ""
	I1101 10:22:51.034255  801153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:22:51.045625  801153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:22:51.057162  801153 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:22:51.057251  801153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:22:51.069881  801153 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:22:51.069907  801153 kubeadm.go:158] found existing configuration files:
	
	I1101 10:22:51.069971  801153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:22:51.080795  801153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:22:51.080903  801153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:22:51.092996  801153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:22:51.105808  801153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:22:51.105902  801153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:22:51.116235  801153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:22:51.127132  801153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:22:51.127216  801153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:22:51.138360  801153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:22:51.148575  801153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:22:51.148672  801153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:22:51.158405  801153 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:22:51.205335  801153 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:22:51.205451  801153 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:22:51.228780  801153 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:22:51.228881  801153 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:22:51.228926  801153 kubeadm.go:319] OS: Linux
	I1101 10:22:51.228975  801153 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:22:51.229030  801153 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:22:51.229087  801153 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:22:51.229156  801153 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:22:51.229225  801153 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:22:51.229295  801153 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:22:51.229367  801153 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:22:51.229444  801153 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:22:51.305667  801153 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:22:51.305830  801153 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:22:51.305984  801153 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:22:51.315368  801153 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 10:22:48.606535  793145 node_ready.go:57] node "kindnet-456743" has "Ready":"False" status (will retry)
	I1101 10:22:51.108106  793145 node_ready.go:49] node "kindnet-456743" is "Ready"
	I1101 10:22:51.108149  793145 node_ready.go:38] duration metric: took 11.005055521s for node "kindnet-456743" to be "Ready" ...
	I1101 10:22:51.108167  793145 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:22:51.108222  793145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:22:51.123311  793145 api_server.go:72] duration metric: took 11.446702053s to wait for apiserver process to appear ...
	I1101 10:22:51.123339  793145 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:22:51.123363  793145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:22:51.128711  793145 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 10:22:51.129917  793145 api_server.go:141] control plane version: v1.34.1
	I1101 10:22:51.129955  793145 api_server.go:131] duration metric: took 6.606268ms to wait for apiserver health ...
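	[editor's note] The healthz probe above hits the API server directly on the node IP. On a stock kubeadm cluster the /healthz, /livez and /readyz endpoints are also readable by unauthenticated clients (via the system:public-info-viewer binding), so an equivalent spot check from the host only needs to skip certificate verification; an illustrative example against the address in this log:
	    # -k: the serving cert is signed by minikube's own CA, not a system-trusted one
	    curl -k https://192.168.103.2:8443/healthz
	    # per-check breakdown
	    curl -k "https://192.168.103.2:8443/readyz?verbose"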
	I1101 10:22:51.129968  793145 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:22:51.133640  793145 system_pods.go:59] 8 kube-system pods found
	I1101 10:22:51.133688  793145 system_pods.go:61] "coredns-66bc5c9577-hfck8" [bb6e02d9-855e-4b3c-876a-b0f31452f63d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:22:51.133697  793145 system_pods.go:61] "etcd-kindnet-456743" [7bf3f7a0-663a-4ed6-bc98-ca0fca7bf135] Running
	I1101 10:22:51.133706  793145 system_pods.go:61] "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
	I1101 10:22:51.133712  793145 system_pods.go:61] "kube-apiserver-kindnet-456743" [d29df756-729a-4f36-9631-dc3d4bb6d27e] Running
	I1101 10:22:51.133717  793145 system_pods.go:61] "kube-controller-manager-kindnet-456743" [7c8ad070-7fd8-4e4a-8714-60190a71324f] Running
	I1101 10:22:51.133723  793145 system_pods.go:61] "kube-proxy-vqxg4" [0b0846f6-61ee-4ca7-9618-bb31448778aa] Running
	I1101 10:22:51.133728  793145 system_pods.go:61] "kube-scheduler-kindnet-456743" [c8f59798-aad0-4bc3-a524-edfcfd89e0d8] Running
	I1101 10:22:51.133735  793145 system_pods.go:61] "storage-provisioner" [324e7159-d2ae-455f-8ff7-b3ffbf64d668] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:22:51.133761  793145 system_pods.go:74] duration metric: took 3.784985ms to wait for pod list to return data ...
	I1101 10:22:51.133775  793145 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:22:51.137357  793145 default_sa.go:45] found service account: "default"
	I1101 10:22:51.137391  793145 default_sa.go:55] duration metric: took 3.607499ms for default service account to be created ...
	I1101 10:22:51.137405  793145 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:22:51.140936  793145 system_pods.go:86] 8 kube-system pods found
	I1101 10:22:51.140979  793145 system_pods.go:89] "coredns-66bc5c9577-hfck8" [bb6e02d9-855e-4b3c-876a-b0f31452f63d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:22:51.141000  793145 system_pods.go:89] "etcd-kindnet-456743" [7bf3f7a0-663a-4ed6-bc98-ca0fca7bf135] Running
	I1101 10:22:51.141023  793145 system_pods.go:89] "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
	I1101 10:22:51.141034  793145 system_pods.go:89] "kube-apiserver-kindnet-456743" [d29df756-729a-4f36-9631-dc3d4bb6d27e] Running
	I1101 10:22:51.141038  793145 system_pods.go:89] "kube-controller-manager-kindnet-456743" [7c8ad070-7fd8-4e4a-8714-60190a71324f] Running
	I1101 10:22:51.141045  793145 system_pods.go:89] "kube-proxy-vqxg4" [0b0846f6-61ee-4ca7-9618-bb31448778aa] Running
	I1101 10:22:51.141055  793145 system_pods.go:89] "kube-scheduler-kindnet-456743" [c8f59798-aad0-4bc3-a524-edfcfd89e0d8] Running
	I1101 10:22:51.141062  793145 system_pods.go:89] "storage-provisioner" [324e7159-d2ae-455f-8ff7-b3ffbf64d668] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:22:51.141092  793145 retry.go:31] will retry after 282.590259ms: missing components: kube-dns
	I1101 10:22:51.430942  793145 system_pods.go:86] 8 kube-system pods found
	I1101 10:22:51.430987  793145 system_pods.go:89] "coredns-66bc5c9577-hfck8" [bb6e02d9-855e-4b3c-876a-b0f31452f63d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:22:51.430996  793145 system_pods.go:89] "etcd-kindnet-456743" [7bf3f7a0-663a-4ed6-bc98-ca0fca7bf135] Running
	I1101 10:22:51.431004  793145 system_pods.go:89] "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
	I1101 10:22:51.431009  793145 system_pods.go:89] "kube-apiserver-kindnet-456743" [d29df756-729a-4f36-9631-dc3d4bb6d27e] Running
	I1101 10:22:51.431015  793145 system_pods.go:89] "kube-controller-manager-kindnet-456743" [7c8ad070-7fd8-4e4a-8714-60190a71324f] Running
	I1101 10:22:51.431031  793145 system_pods.go:89] "kube-proxy-vqxg4" [0b0846f6-61ee-4ca7-9618-bb31448778aa] Running
	I1101 10:22:51.431035  793145 system_pods.go:89] "kube-scheduler-kindnet-456743" [c8f59798-aad0-4bc3-a524-edfcfd89e0d8] Running
	I1101 10:22:51.431042  793145 system_pods.go:89] "storage-provisioner" [324e7159-d2ae-455f-8ff7-b3ffbf64d668] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:22:51.431063  793145 retry.go:31] will retry after 346.269844ms: missing components: kube-dns
	I1101 10:22:51.781622  793145 system_pods.go:86] 8 kube-system pods found
	I1101 10:22:51.781655  793145 system_pods.go:89] "coredns-66bc5c9577-hfck8" [bb6e02d9-855e-4b3c-876a-b0f31452f63d] Running
	I1101 10:22:51.781661  793145 system_pods.go:89] "etcd-kindnet-456743" [7bf3f7a0-663a-4ed6-bc98-ca0fca7bf135] Running
	I1101 10:22:51.781664  793145 system_pods.go:89] "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
	I1101 10:22:51.781669  793145 system_pods.go:89] "kube-apiserver-kindnet-456743" [d29df756-729a-4f36-9631-dc3d4bb6d27e] Running
	I1101 10:22:51.781672  793145 system_pods.go:89] "kube-controller-manager-kindnet-456743" [7c8ad070-7fd8-4e4a-8714-60190a71324f] Running
	I1101 10:22:51.781676  793145 system_pods.go:89] "kube-proxy-vqxg4" [0b0846f6-61ee-4ca7-9618-bb31448778aa] Running
	I1101 10:22:51.781679  793145 system_pods.go:89] "kube-scheduler-kindnet-456743" [c8f59798-aad0-4bc3-a524-edfcfd89e0d8] Running
	I1101 10:22:51.781682  793145 system_pods.go:89] "storage-provisioner" [324e7159-d2ae-455f-8ff7-b3ffbf64d668] Running
	I1101 10:22:51.781692  793145 system_pods.go:126] duration metric: took 644.279491ms to wait for k8s-apps to be running ...
	I1101 10:22:51.781698  793145 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:22:51.781749  793145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:51.797335  793145 system_svc.go:56] duration metric: took 15.620993ms WaitForService to wait for kubelet
	I1101 10:22:51.797374  793145 kubeadm.go:587] duration metric: took 12.120770159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:22:51.797402  793145 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:22:51.801143  793145 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:22:51.801172  793145 node_conditions.go:123] node cpu capacity is 8
	I1101 10:22:51.801186  793145 node_conditions.go:105] duration metric: took 3.77935ms to run NodePressure ...
	I1101 10:22:51.801199  793145 start.go:242] waiting for startup goroutines ...
	I1101 10:22:51.801206  793145 start.go:247] waiting for cluster config update ...
	I1101 10:22:51.801217  793145 start.go:256] writing updated cluster config ...
	I1101 10:22:51.801489  793145 ssh_runner.go:195] Run: rm -f paused
	I1101 10:22:51.806495  793145 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:22:51.811298  793145 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hfck8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.816297  793145 pod_ready.go:94] pod "coredns-66bc5c9577-hfck8" is "Ready"
	I1101 10:22:51.816336  793145 pod_ready.go:86] duration metric: took 5.011512ms for pod "coredns-66bc5c9577-hfck8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.818762  793145 pod_ready.go:83] waiting for pod "etcd-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.823460  793145 pod_ready.go:94] pod "etcd-kindnet-456743" is "Ready"
	I1101 10:22:51.823489  793145 pod_ready.go:86] duration metric: took 4.694399ms for pod "etcd-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.825831  793145 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.830102  793145 pod_ready.go:94] pod "kube-apiserver-kindnet-456743" is "Ready"
	I1101 10:22:51.830129  793145 pod_ready.go:86] duration metric: took 4.260597ms for pod "kube-apiserver-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.832503  793145 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:52.211970  793145 pod_ready.go:94] pod "kube-controller-manager-kindnet-456743" is "Ready"
	I1101 10:22:52.212001  793145 pod_ready.go:86] duration metric: took 379.473155ms for pod "kube-controller-manager-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:52.411929  793145 pod_ready.go:83] waiting for pod "kube-proxy-vqxg4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:52.811722  793145 pod_ready.go:94] pod "kube-proxy-vqxg4" is "Ready"
	I1101 10:22:52.811753  793145 pod_ready.go:86] duration metric: took 399.797989ms for pod "kube-proxy-vqxg4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:53.012285  793145 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:53.411581  793145 pod_ready.go:94] pod "kube-scheduler-kindnet-456743" is "Ready"
	I1101 10:22:53.411613  793145 pod_ready.go:86] duration metric: took 399.29963ms for pod "kube-scheduler-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:53.411626  793145 pod_ready.go:40] duration metric: took 1.605087631s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:22:53.468335  793145 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:22:53.496593  793145 out.go:179] * Done! kubectl is now configured to use "kindnet-456743" cluster and "default" namespace by default
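	[editor's note] With the kindnet-456743 context now written to the kubeconfig, the cluster can be exercised immediately; a minimal follow-up (illustrative, not part of the test run):
	    # list everything the bring-up created, across all namespaces
	    kubectl --context kindnet-456743 get pods -A -o wide
	    # confirm node readiness and the assigned pod CIDR
	    kubectl --context kindnet-456743 get nodes -o wide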
	I1101 10:22:51.318052  801153 out.go:252]   - Generating certificates and keys ...
	I1101 10:22:51.318176  801153 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:22:51.318283  801153 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:22:51.728492  801153 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:22:52.056716  801153 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:22:52.294076  801153 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:22:52.444149  801153 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:22:52.808601  801153 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:22:52.808789  801153 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-456743 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:22:52.987332  801153 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:22:52.987700  801153 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-456743 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:22:53.108578  801153 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:22:53.498286  801153 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:22:53.856542  801153 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:22:53.856659  801153 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:22:54.056209  801153 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:22:54.340500  801153 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:22:54.474563  801153 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:22:54.686908  801153 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:22:54.838119  801153 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:22:54.838778  801153 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:22:54.846892  801153 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
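	[editor's note] The [certs] and [kubeconfig] phases above are the standard kubeadm init pipeline: component certificates signed by the cluster CA (or the dedicated etcd CA), kubeconfigs wired to them, then static pod manifests. A hedged spot check against the certificate directory this log uses (/var/lib/minikube/certs) would be:
	    # show the SANs kubeadm baked into the apiserver serving certificate
	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
	    # likewise for the etcd serving certificate generated above
	    sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'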
	I1101 10:22:50.388820  805154 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:22:50.389083  805154 start.go:159] libmachine.API.Create for "custom-flannel-456743" (driver="docker")
	I1101 10:22:50.389120  805154 client.go:173] LocalClient.Create starting
	I1101 10:22:50.389184  805154 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem
	I1101 10:22:50.389223  805154 main.go:143] libmachine: Decoding PEM data...
	I1101 10:22:50.389235  805154 main.go:143] libmachine: Parsing certificate...
	I1101 10:22:50.389302  805154 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem
	I1101 10:22:50.389324  805154 main.go:143] libmachine: Decoding PEM data...
	I1101 10:22:50.389335  805154 main.go:143] libmachine: Parsing certificate...
	I1101 10:22:50.389667  805154 cli_runner.go:164] Run: docker network inspect custom-flannel-456743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:22:50.407866  805154 cli_runner.go:211] docker network inspect custom-flannel-456743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:22:50.407976  805154 network_create.go:284] running [docker network inspect custom-flannel-456743] to gather additional debugging logs...
	I1101 10:22:50.408002  805154 cli_runner.go:164] Run: docker network inspect custom-flannel-456743
	W1101 10:22:50.426015  805154 cli_runner.go:211] docker network inspect custom-flannel-456743 returned with exit code 1
	I1101 10:22:50.426066  805154 network_create.go:287] error running [docker network inspect custom-flannel-456743]: docker network inspect custom-flannel-456743: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-456743 not found
	I1101 10:22:50.426083  805154 network_create.go:289] output of [docker network inspect custom-flannel-456743]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-456743 not found
	
	** /stderr **
	I1101 10:22:50.426199  805154 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:22:50.444799  805154 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db3052bfa0e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:6a:af:78:80:46} reservation:<nil>}
	I1101 10:22:50.445545  805154 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-99d2741e1e59 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:99:ce:91:38:1c} reservation:<nil>}
	I1101 10:22:50.446275  805154 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a696a61d1319 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:f0:66:2c:aa:f2} reservation:<nil>}
	I1101 10:22:50.446931  805154 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0fdd894de01b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:09:d4:bc:cb:f6} reservation:<nil>}
	I1101 10:22:50.447724  805154 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f195e0}
	I1101 10:22:50.447753  805154 network_create.go:124] attempt to create docker network custom-flannel-456743 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:22:50.447811  805154 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-456743 custom-flannel-456743
	I1101 10:22:50.508786  805154 network_create.go:108] docker network custom-flannel-456743 192.168.85.0/24 created
	I1101 10:22:50.508825  805154 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-456743" container
	I1101 10:22:50.508934  805154 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:22:50.527830  805154 cli_runner.go:164] Run: docker volume create custom-flannel-456743 --label name.minikube.sigs.k8s.io=custom-flannel-456743 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:22:50.548596  805154 oci.go:103] Successfully created a docker volume custom-flannel-456743
	I1101 10:22:50.548723  805154 cli_runner.go:164] Run: docker run --rm --name custom-flannel-456743-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-456743 --entrypoint /usr/bin/test -v custom-flannel-456743:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:22:50.957485  805154 oci.go:107] Successfully prepared a docker volume custom-flannel-456743
	I1101 10:22:50.957549  805154 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:22:50.957585  805154 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:22:50.957663  805154 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-456743:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
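	[editor's note] The run above unpacks the CRI-O preload tarball straight into the custom-flannel-456743 volume, which the node container later mounts at /var, so the image store is populated before kicbase boots. A speculative way to peek at the result from the host, assuming the tarball lays the store out under lib/containers inside the volume:
	    # mount the same volume at /var and list the pre-seeded image store
	    docker run --rm -v custom-flannel-456743:/var --entrypoint /bin/ls \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 \
	      /var/lib/containers/storage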
	
	
	==> CRI-O <==
	Nov 01 10:22:18 embed-certs-678014 crio[558]: time="2025-11-01T10:22:18.007032399Z" level=info msg="Started container" PID=1738 containerID=6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper id=5213f6e5-3861-4962-a56d-ad96b7b4eab4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0f988da58be2a6372cdabee768ba194a378a5710c7cb1a12f81abac133187e2
	Nov 01 10:22:18 embed-certs-678014 crio[558]: time="2025-11-01T10:22:18.969027349Z" level=info msg="Removing container: e402119d9689ac3dac99ec561209fea4106acc4b5c5317ea72c7349fe7cc500b" id=f91f8df2-33b2-407c-86f7-8e7c7dffd2b7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:18 embed-certs-678014 crio[558]: time="2025-11-01T10:22:18.981489165Z" level=info msg="Removed container e402119d9689ac3dac99ec561209fea4106acc4b5c5317ea72c7349fe7cc500b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper" id=f91f8df2-33b2-407c-86f7-8e7c7dffd2b7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.014749988Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9e807c26-7f09-44af-9a0e-a5f5e074b2f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.015924824Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9bfe896b-ca36-475e-806f-84dad8ffd885 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.017295877Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a2f5db01-2190-4c8a-800b-5cb76a516f16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.017494772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.02293917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.023242041Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bc71c9b1fd531010cbfefbac39bb401a8a22b1525bbcf92e8279fb08e01cf533/merged/etc/passwd: no such file or directory"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.023275059Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bc71c9b1fd531010cbfefbac39bb401a8a22b1525bbcf92e8279fb08e01cf533/merged/etc/group: no such file or directory"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.02363096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.052238299Z" level=info msg="Created container 5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed: kube-system/storage-provisioner/storage-provisioner" id=a2f5db01-2190-4c8a-800b-5cb76a516f16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.053084598Z" level=info msg="Starting container: 5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed" id=00962976-fe3c-49ee-80e9-f13cd61b7f58 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.05555872Z" level=info msg="Started container" PID=1753 containerID=5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed description=kube-system/storage-provisioner/storage-provisioner id=00962976-fe3c-49ee-80e9-f13cd61b7f58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5922f00fb3cd66a2fa3684e0dbd57130a3056f4a5a150f7e26a26b1628c4aaf8
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.853414193Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=12dcc5b2-e4ce-4d73-99d5-e14e0d25197d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.85482214Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f8329794-1436-4117-8768-32840f234a1a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.856237326Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper" id=b2a17f55-4dc6-4ae5-91e8-83eb2355c1af name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.856414284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.864143412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.864616854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.903619124Z" level=info msg="Created container 9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper" id=b2a17f55-4dc6-4ae5-91e8-83eb2355c1af name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.904390832Z" level=info msg="Starting container: 9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629" id=71aa64c0-0be4-46ba-bd66-bd13f5d3cf88 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.906680252Z" level=info msg="Started container" PID=1767 containerID=9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper id=71aa64c0-0be4-46ba-bd66-bd13f5d3cf88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0f988da58be2a6372cdabee768ba194a378a5710c7cb1a12f81abac133187e2
	Nov 01 10:22:38 embed-certs-678014 crio[558]: time="2025-11-01T10:22:38.030167653Z" level=info msg="Removing container: 6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8" id=6c9dbf8e-1609-402a-99ca-31d4e840fb4f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:38 embed-certs-678014 crio[558]: time="2025-11-01T10:22:38.040895846Z" level=info msg="Removed container 6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper" id=6c9dbf8e-1609-402a-99ca-31d4e840fb4f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9ea3b3518d7a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   c0f988da58be2       dashboard-metrics-scraper-6ffb444bf9-k2wlz   kubernetes-dashboard
	5be0079577779       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   5922f00fb3cd6       storage-provisioner                          kube-system
	6ff08c3f98900       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   a634e7898d58c       kubernetes-dashboard-855c9754f9-cpmxg        kubernetes-dashboard
	dfa54d1337f69       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   c3d95bd3ca2e3       busybox                                      default
	e9c8510360460       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   1fbb0af0101d9       coredns-66bc5c9577-vlf7q                     kube-system
	090328e2d66c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   5922f00fb3cd6       storage-provisioner                          kube-system
	7b3d50aff9126       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   ad69d5c3b49aa       kindnet-fzb8b                                kube-system
	901ec54f9139c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   c998c19487d95       kube-proxy-tlw2d                             kube-system
	77c8dcd2cdbb1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   d677d7d29066d       kube-apiserver-embed-certs-678014            kube-system
	9882b066954b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   34a2a3c62e39a       etcd-embed-certs-678014                      kube-system
	bb7743b9e3f29       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   faa64f5c421eb       kube-controller-manager-embed-certs-678014   kube-system
	a4e56bd25efad       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   c7882dec93ecc       kube-scheduler-embed-certs-678014            kube-system
	
	
	==> coredns [e9c85103604609c36cfb00de71bfe70f095051d470ae83fe1db5422a8554bc65] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49039 - 18623 "HINFO IN 89422442011442453.521556987964252447. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.035445654s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-678014
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-678014
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=embed-certs-678014
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-678014
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:22:44 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:22:44 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:22:44 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:22:44 +0000   Sat, 01 Nov 2025 10:21:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-678014
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                03d8f849-7655-423d-8ed7-89c54dfab59c
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-vlf7q                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m17s
	  kube-system                 etcd-embed-certs-678014                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m23s
	  kube-system                 kindnet-fzb8b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-678014             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-678014    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-tlw2d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-678014             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k2wlz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cpmxg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m15s              kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m23s              kubelet          Node embed-certs-678014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s              kubelet          Node embed-certs-678014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s              kubelet          Node embed-certs-678014 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m23s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m18s              node-controller  Node embed-certs-678014 event: Registered Node embed-certs-678014 in Controller
	  Normal  NodeReady                96s                kubelet          Node embed-certs-678014 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-678014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-678014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-678014 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node embed-certs-678014 event: Registered Node embed-certs-678014 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [9882b066954b83924bdc61795f906efe75b16f0dcdb7b9d8bce879789c8743e3] <==
	{"level":"warn","ts":"2025-11-01T10:22:03.014650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.021925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.031155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.039688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.047787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.064783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.072727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.079639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:13.812433Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.623561ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765876372471723 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-embed-certs-678014\" mod_revision:583 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-embed-certs-678014\" value_size:5862 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-embed-certs-678014\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:22:13.812671Z","caller":"traceutil/trace.go:172","msg":"trace[549526070] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"239.194657ms","start":"2025-11-01T10:22:13.573432Z","end":"2025-11-01T10:22:13.812626Z","steps":["trace[549526070] 'process raft request'  (duration: 40.744018ms)","trace[549526070] 'compare'  (duration: 197.509449ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:22:14.024008Z","caller":"traceutil/trace.go:172","msg":"trace[1977700606] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"130.813137ms","start":"2025-11-01T10:22:13.893165Z","end":"2025-11-01T10:22:14.023978Z","steps":["trace[1977700606] 'process raft request'  (duration: 40.080541ms)","trace[1977700606] 'compare'  (duration: 90.529129ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:22:14.353006Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.343155ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765876372471727 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-678014\" mod_revision:461 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-678014\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-678014\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:22:14.353121Z","caller":"traceutil/trace.go:172","msg":"trace[249256731] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"292.177346ms","start":"2025-11-01T10:22:14.060930Z","end":"2025-11-01T10:22:14.353107Z","steps":["trace[249256731] 'process raft request'  (duration: 80.651249ms)","trace[249256731] 'compare'  (duration: 211.160219ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:22:14.851418Z","caller":"traceutil/trace.go:172","msg":"trace[2064654972] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"152.121107ms","start":"2025-11-01T10:22:14.699278Z","end":"2025-11-01T10:22:14.851399Z","steps":["trace[2064654972] 'process raft request'  (duration: 151.994072ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:22:15.036198Z","caller":"traceutil/trace.go:172","msg":"trace[1783793252] linearizableReadLoop","detail":"{readStateIndex:624; appliedIndex:624; }","duration":"117.299545ms","start":"2025-11-01T10:22:14.918872Z","end":"2025-11-01T10:22:15.036171Z","steps":["trace[1783793252] 'read index received'  (duration: 117.291682ms)","trace[1783793252] 'applied index is now lower than readState.Index'  (duration: 6.638µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:22:15.036416Z","caller":"traceutil/trace.go:172","msg":"trace[1533323963] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"175.995307ms","start":"2025-11-01T10:22:14.860406Z","end":"2025-11-01T10:22:15.036401Z","steps":["trace[1533323963] 'process raft request'  (duration: 175.842004ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:22:15.036443Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.536223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-11-01T10:22:15.036491Z","caller":"traceutil/trace.go:172","msg":"trace[975312385] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:587; }","duration":"117.60814ms","start":"2025-11-01T10:22:14.918863Z","end":"2025-11-01T10:22:15.036471Z","steps":["trace[975312385] 'agreement among raft nodes before linearized reading'  (duration: 117.434235ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:22:44.679084Z","caller":"traceutil/trace.go:172","msg":"trace[822222031] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"125.117875ms","start":"2025-11-01T10:22:44.553948Z","end":"2025-11-01T10:22:44.679066Z","steps":["trace[822222031] 'process raft request'  (duration: 124.983051ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:22:44.929152Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.064347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:22:44.929220Z","caller":"traceutil/trace.go:172","msg":"trace[701815087] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:642; }","duration":"155.145243ms","start":"2025-11-01T10:22:44.774060Z","end":"2025-11-01T10:22:44.929206Z","steps":["trace[701815087] 'agreement among raft nodes before linearized reading'  (duration: 66.835659ms)","trace[701815087] 'range keys from in-memory index tree'  (duration: 88.190996ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:22:44.929347Z","caller":"traceutil/trace.go:172","msg":"trace[947219053] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"209.157062ms","start":"2025-11-01T10:22:44.720169Z","end":"2025-11-01T10:22:44.929326Z","steps":["trace[947219053] 'process raft request'  (duration: 120.734261ms)","trace[947219053] 'compare'  (duration: 88.212016ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:22:44.929708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.585841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:22:44.929776Z","caller":"traceutil/trace.go:172","msg":"trace[1277428426] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:643; }","duration":"101.957795ms","start":"2025-11-01T10:22:44.827806Z","end":"2025-11-01T10:22:44.929764Z","steps":["trace[1277428426] 'agreement among raft nodes before linearized reading'  (duration: 101.564316ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:22:54.756238Z","caller":"traceutil/trace.go:172","msg":"trace[801369266] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"264.117025ms","start":"2025-11-01T10:22:54.492101Z","end":"2025-11-01T10:22:54.756218Z","steps":["trace[801369266] 'process raft request'  (duration: 263.991977ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:22:57 up  3:05,  0 user,  load average: 4.69, 3.93, 3.06
	Linux embed-certs-678014 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b3d50aff91266580f138509e805b375d6b764cfe7138fdc0bb1b3780d21f7e0] <==
	I1101 10:22:04.509110       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:22:04.509446       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:22:04.509625       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:22:04.509685       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:22:04.509721       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:22:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:22:04.709762       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:22:04.709783       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:22:04.709794       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:22:04.709920       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:22:05.110153       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:22:05.110180       1 metrics.go:72] Registering metrics
	I1101 10:22:05.207963       1 controller.go:711] "Syncing nftables rules"
	I1101 10:22:14.709771       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:14.709827       1 main.go:301] handling current node
	I1101 10:22:24.709457       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:24.709518       1 main.go:301] handling current node
	I1101 10:22:34.709604       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:34.709638       1 main.go:301] handling current node
	I1101 10:22:44.709890       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:44.709929       1 main.go:301] handling current node
	I1101 10:22:54.709189       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:54.709243       1 main.go:301] handling current node
	
	
	==> kube-apiserver [77c8dcd2cdbb15ad48c01e45cd25792e208735c6eda9f44bc1fa9ab853e0081c] <==
	I1101 10:22:03.714114       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:22:03.714404       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:22:03.719099       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:22:03.721217       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:22:03.721285       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:22:03.721296       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:22:03.721304       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:22:03.721310       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:22:03.737697       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:22:03.780405       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:22:03.789387       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:22:03.789437       1 policy_source.go:240] refreshing policies
	I1101 10:22:03.792628       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:22:03.955945       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:22:04.094902       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:22:04.149800       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:22:04.197329       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:22:04.217558       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:22:04.299963       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.160.202"}
	I1101 10:22:04.314150       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.12.253"}
	I1101 10:22:04.608396       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:22:07.065245       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:22:07.462771       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:22:07.662984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bb7743b9e3f295728cb34054b001eac220d6549f08d9f5e304789213cc644bae] <==
	I1101 10:22:07.043214       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:22:07.058678       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:22:07.058699       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:22:07.058762       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:22:07.058783       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:22:07.058883       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:22:07.058916       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:22:07.059102       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:22:07.059211       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-678014"
	I1101 10:22:07.059246       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:22:07.059256       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:22:07.059265       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:22:07.059257       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:22:07.060401       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:22:07.060441       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:22:07.062745       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:22:07.064817       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:22:07.066110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:22:07.067199       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:22:07.068972       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:22:07.071897       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:22:07.077579       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:22:07.077606       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:22:07.077620       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:22:07.082943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [901ec54f9139c34f1066587c7237ab3984a2c279347d55a1d0b038574bbca217] <==
	I1101 10:22:04.297309       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:22:04.366040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:22:04.466788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:22:04.466825       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 10:22:04.466929       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:22:04.485729       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:22:04.485790       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:22:04.491329       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:22:04.491747       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:22:04.491775       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:22:04.493409       1 config.go:200] "Starting service config controller"
	I1101 10:22:04.493443       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:22:04.493482       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:22:04.493488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:22:04.493501       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:22:04.493526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:22:04.493605       1 config.go:309] "Starting node config controller"
	I1101 10:22:04.493667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:22:04.493677       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:22:04.593632       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:22:04.593652       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:22:04.593657       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a4e56bd25efad002d1eb660d328f3fda9e93ba58bb33f2e388635b902755f1e9] <==
	I1101 10:22:02.124868       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:22:03.643970       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:22:03.644040       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:22:03.644057       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:22:03.644067       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:22:03.682662       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:22:03.682698       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:22:03.689887       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:22:03.689935       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:22:03.691115       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:22:03.691198       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:22:03.694784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:22:03.696643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1101 10:22:04.990668       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:22:07 embed-certs-678014 kubelet[718]: I1101 10:22:07.673348     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvqw\" (UniqueName: \"kubernetes.io/projected/e5f61d2c-8a94-4c09-a691-33ae048466f1-kube-api-access-xcvqw\") pod \"dashboard-metrics-scraper-6ffb444bf9-k2wlz\" (UID: \"e5f61d2c-8a94-4c09-a691-33ae048466f1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz"
	Nov 01 10:22:07 embed-certs-678014 kubelet[718]: I1101 10:22:07.673400     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdwn\" (UniqueName: \"kubernetes.io/projected/6d549260-f10c-4681-8da0-9ae59df674d3-kube-api-access-bsdwn\") pod \"kubernetes-dashboard-855c9754f9-cpmxg\" (UID: \"6d549260-f10c-4681-8da0-9ae59df674d3\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cpmxg"
	Nov 01 10:22:07 embed-certs-678014 kubelet[718]: I1101 10:22:07.673437     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6d549260-f10c-4681-8da0-9ae59df674d3-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cpmxg\" (UID: \"6d549260-f10c-4681-8da0-9ae59df674d3\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cpmxg"
	Nov 01 10:22:07 embed-certs-678014 kubelet[718]: I1101 10:22:07.673514     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e5f61d2c-8a94-4c09-a691-33ae048466f1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-k2wlz\" (UID: \"e5f61d2c-8a94-4c09-a691-33ae048466f1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz"
	Nov 01 10:22:09 embed-certs-678014 kubelet[718]: I1101 10:22:09.639985     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:22:15 embed-certs-678014 kubelet[718]: I1101 10:22:15.975867     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cpmxg" podStartSLOduration=2.198900274 podStartE2EDuration="8.975828652s" podCreationTimestamp="2025-11-01 10:22:07 +0000 UTC" firstStartedPulling="2025-11-01 10:22:07.919638884 +0000 UTC m=+7.168674927" lastFinishedPulling="2025-11-01 10:22:14.696567258 +0000 UTC m=+13.945603305" observedRunningTime="2025-11-01 10:22:15.974731298 +0000 UTC m=+15.223767362" watchObservedRunningTime="2025-11-01 10:22:15.975828652 +0000 UTC m=+15.224864718"
	Nov 01 10:22:17 embed-certs-678014 kubelet[718]: I1101 10:22:17.961960     718 scope.go:117] "RemoveContainer" containerID="e402119d9689ac3dac99ec561209fea4106acc4b5c5317ea72c7349fe7cc500b"
	Nov 01 10:22:18 embed-certs-678014 kubelet[718]: I1101 10:22:18.967344     718 scope.go:117] "RemoveContainer" containerID="e402119d9689ac3dac99ec561209fea4106acc4b5c5317ea72c7349fe7cc500b"
	Nov 01 10:22:18 embed-certs-678014 kubelet[718]: I1101 10:22:18.967549     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:18 embed-certs-678014 kubelet[718]: E1101 10:22:18.967767     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:19 embed-certs-678014 kubelet[718]: I1101 10:22:19.972075     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:19 embed-certs-678014 kubelet[718]: E1101 10:22:19.972236     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:26 embed-certs-678014 kubelet[718]: I1101 10:22:26.623774     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:26 embed-certs-678014 kubelet[718]: E1101 10:22:26.624064     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:35 embed-certs-678014 kubelet[718]: I1101 10:22:35.014295     718 scope.go:117] "RemoveContainer" containerID="090328e2d66c9eab8a50d6179bde736e4e3c793c38917b3d82a09df65c4b1ee2"
	Nov 01 10:22:37 embed-certs-678014 kubelet[718]: I1101 10:22:37.852428     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:38 embed-certs-678014 kubelet[718]: I1101 10:22:38.028701     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:38 embed-certs-678014 kubelet[718]: I1101 10:22:38.028928     718 scope.go:117] "RemoveContainer" containerID="9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629"
	Nov 01 10:22:38 embed-certs-678014 kubelet[718]: E1101 10:22:38.029148     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:46 embed-certs-678014 kubelet[718]: I1101 10:22:46.623370     718 scope.go:117] "RemoveContainer" containerID="9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629"
	Nov 01 10:22:46 embed-certs-678014 kubelet[718]: E1101 10:22:46.623620     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:53 embed-certs-678014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:22:53 embed-certs-678014 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:22:53 embed-certs-678014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:22:53 embed-certs-678014 systemd[1]: kubelet.service: Consumed 1.841s CPU time.
	
	
	==> kubernetes-dashboard [6ff08c3f9890015052d9adbb802e29cfd38776e9a69671ca1aacbe3ea7955d0a] <==
	2025/11/01 10:22:15 Starting overwatch
	2025/11/01 10:22:15 Using namespace: kubernetes-dashboard
	2025/11/01 10:22:15 Using in-cluster config to connect to apiserver
	2025/11/01 10:22:15 Using secret token for csrf signing
	2025/11/01 10:22:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:22:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:22:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:22:15 Generating JWE encryption key
	2025/11/01 10:22:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:22:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:22:15 Initializing JWE encryption key from synchronized object
	2025/11/01 10:22:15 Creating in-cluster Sidecar client
	2025/11/01 10:22:15 Serving insecurely on HTTP port: 9090
	2025/11/01 10:22:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:22:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [090328e2d66c9eab8a50d6179bde736e4e3c793c38917b3d82a09df65c4b1ee2] <==
	I1101 10:22:04.261751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:22:34.269278       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed] <==
	I1101 10:22:35.069248       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:22:35.078416       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:22:35.078478       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:22:35.081230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:38.536742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:42.797580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:46.396387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:49.450431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:52.472645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:52.478162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:22:52.478354       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:22:52.478431       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8bf23cc-2536-4ed5-ae0e-07000c30e5da", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-678014_0e4427db-d3c3-4189-9760-17e140666d54 became leader
	I1101 10:22:52.478520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-678014_0e4427db-d3c3-4189-9760-17e140666d54!
	W1101 10:22:52.480808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:52.485211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:22:52.579573       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-678014_0e4427db-d3c3-4189-9760-17e140666d54!
	W1101 10:22:54.489082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:54.757374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:56.762960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:56.770970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-678014 -n embed-certs-678014
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-678014 -n embed-certs-678014: exit status 2 (392.902218ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-678014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-678014
helpers_test.go:243: (dbg) docker inspect embed-certs-678014:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8",
	        "Created": "2025-11-01T10:20:19.10525333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788972,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:21:53.744994685Z",
	            "FinishedAt": "2025-11-01T10:21:52.426238128Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/hosts",
	        "LogPath": "/var/lib/docker/containers/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8/7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8-json.log",
	        "Name": "/embed-certs-678014",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-678014:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-678014",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7254f01179dafb8322614a82be58e2957cd9de66ff411933469e1b25bd59d7b8",
	                "LowerDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4-init/diff:/var/lib/docker/overlay2/b8508c46c8b6b590f78d056c60b5d8b2e8edcec6934bc48bd6b4bd315b08a50c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa1b4666a9401b2b8455588bf0fc7ae32d80d9a94c693ed716d98b8d8b3eeed4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-678014",
	                "Source": "/var/lib/docker/volumes/embed-certs-678014/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-678014",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-678014",
	                "name.minikube.sigs.k8s.io": "embed-certs-678014",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e09db449d55eda9bc07fb94c95a156f29886cef12615c4350c91812dfcf0fc37",
	            "SandboxKey": "/var/run/docker/netns/e09db449d55e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-678014": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:8c:82:3f:e7:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "59c3492c15198878d11d0583248059a9226a90667cc7e5ff7108cce34fc74e86",
	                    "EndpointID": "d9417974da7ee44f4160f0a6771ed01e11add10998f9d7ad123fbd1b006ad337",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-678014",
	                        "7254f01179da"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678014 -n embed-certs-678014
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678014 -n embed-certs-678014: exit status 2 (409.120399ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-678014 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-678014 logs -n 25: (1.311865271s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-456743 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo docker system info                                                                                                                             │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cri-dockerd --version                                                                                                                          │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ ssh     │ -p auto-456743 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo containerd config dump                                                                                                                         │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ ssh     │ -p auto-456743 sudo crio config                                                                                                                                    │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ delete  │ -p auto-456743                                                                                                                                                     │ auto-456743                  │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ image   │ default-k8s-diff-port-535119 image list --format=json                                                                                                              │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ pause   │ -p default-k8s-diff-port-535119 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ start   │ -p calico-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-456743                │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-535119                                                                                                                                    │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ delete  │ -p default-k8s-diff-port-535119                                                                                                                                    │ default-k8s-diff-port-535119 │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ start   │ -p custom-flannel-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-456743        │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	│ image   │ embed-certs-678014 image list --format=json                                                                                                                        │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │ 01 Nov 25 10:22 UTC │
	│ pause   │ -p embed-certs-678014 --alsologtostderr -v=1                                                                                                                       │ embed-certs-678014           │ jenkins │ v1.37.0 │ 01 Nov 25 10:22 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:22:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:22:50.184215  805154 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:22:50.184501  805154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:50.184511  805154 out.go:374] Setting ErrFile to fd 2...
	I1101 10:22:50.184516  805154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:22:50.184737  805154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:22:50.185266  805154 out.go:368] Setting JSON to false
	I1101 10:22:50.186505  805154 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11107,"bootTime":1761981463,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:22:50.186610  805154 start.go:143] virtualization: kvm guest
	I1101 10:22:50.188359  805154 out.go:179] * [custom-flannel-456743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:22:50.189405  805154 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:22:50.189437  805154 notify.go:221] Checking for updates...
	I1101 10:22:50.191428  805154 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:22:50.192479  805154 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:22:50.193423  805154 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:22:50.194349  805154 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:22:50.195425  805154 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:22:50.197139  805154 config.go:182] Loaded profile config "calico-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:50.197288  805154 config.go:182] Loaded profile config "embed-certs-678014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:50.197426  805154 config.go:182] Loaded profile config "kindnet-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:22:50.197583  805154 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:22:50.224654  805154 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:22:50.224857  805154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:50.287554  805154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:22:50.27496342 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:50.287661  805154 docker.go:319] overlay module found
	I1101 10:22:50.289262  805154 out.go:179] * Using the docker driver based on user configuration
	I1101 10:22:50.290224  805154 start.go:309] selected driver: docker
	I1101 10:22:50.290240  805154 start.go:930] validating driver "docker" against <nil>
	I1101 10:22:50.290267  805154 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:22:50.290931  805154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:22:50.357584  805154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:22:50.345249025 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:22:50.357904  805154 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:22:50.358163  805154 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:22:50.359570  805154 out.go:179] * Using Docker driver with root privileges
	I1101 10:22:50.360566  805154 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1101 10:22:50.360597  805154 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1101 10:22:50.360678  805154 start.go:353] cluster config:
	{Name:custom-flannel-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:22:50.361891  805154 out.go:179] * Starting "custom-flannel-456743" primary control-plane node in "custom-flannel-456743" cluster
	I1101 10:22:50.362742  805154 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:22:50.363702  805154 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:22:50.364519  805154 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:22:50.364581  805154 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:22:50.364597  805154 cache.go:59] Caching tarball of preloaded images
	I1101 10:22:50.364619  805154 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:22:50.364735  805154 preload.go:233] Found /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:22:50.364752  805154 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:22:50.364936  805154 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/custom-flannel-456743/config.json ...
	I1101 10:22:50.364972  805154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/custom-flannel-456743/config.json: {Name:mk09a24e08f2b3815c861f8baa1a6832ec95b79a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.386898  805154 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:22:50.386925  805154 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:22:50.386944  805154 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:22:50.386973  805154 start.go:360] acquireMachinesLock for custom-flannel-456743: {Name:mk360259e43a462b6efc02b89ea4bcf9f3bf408f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:22:50.387087  805154 start.go:364] duration metric: took 96.375µs to acquireMachinesLock for "custom-flannel-456743"
	I1101 10:22:50.387116  805154 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-456743 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:22:50.387192  805154 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:22:49.584979  801153 cli_runner.go:164] Run: docker network inspect calico-456743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:22:49.603718  801153 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:22:49.608673  801153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:22:49.620395  801153 kubeadm.go:884] updating cluster {Name:calico-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:22:49.620591  801153 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:22:49.620665  801153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:22:49.660884  801153 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:22:49.660912  801153 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:22:49.660974  801153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:22:49.691085  801153 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:22:49.691110  801153 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:22:49.691119  801153 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 10:22:49.691211  801153 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-456743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1101 10:22:49.691282  801153 ssh_runner.go:195] Run: crio config
	I1101 10:22:49.742395  801153 cni.go:84] Creating CNI manager for "calico"
	I1101 10:22:49.742428  801153 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:22:49.742452  801153 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-456743 NodeName:calico-456743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:22:49.742601  801153 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-456743"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:22:49.742670  801153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:22:49.752094  801153 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:22:49.752178  801153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:22:49.761579  801153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 10:22:49.776508  801153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:22:49.793411  801153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 10:22:49.808383  801153 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:22:49.812646  801153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:22:49.824296  801153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:22:49.916760  801153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:22:49.949786  801153 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743 for IP: 192.168.76.2
	I1101 10:22:49.949827  801153 certs.go:195] generating shared ca certs ...
	I1101 10:22:49.949885  801153 certs.go:227] acquiring lock for ca certs: {Name:mk86760015e5e32f1c55d03d8b768f64dc56f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:49.950073  801153 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key
	I1101 10:22:49.950112  801153 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key
	I1101 10:22:49.950125  801153 certs.go:257] generating profile certs ...
	I1101 10:22:49.950197  801153 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.key
	I1101 10:22:49.950215  801153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.crt with IP's: []
	I1101 10:22:50.186469  801153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.crt ...
	I1101 10:22:50.186497  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.crt: {Name:mkd1d16610a1f1f6545db441b228216545c8d2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.186681  801153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.key ...
	I1101 10:22:50.186696  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/client.key: {Name:mk08a247a28ae504750b07e5b5e9b2fc5bb68145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.186811  801153 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key.1972247e
	I1101 10:22:50.186828  801153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt.1972247e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:22:50.420860  801153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt.1972247e ...
	I1101 10:22:50.420901  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt.1972247e: {Name:mk4a5406f63293a9c25bd6b43e34fa140d5ba573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.421124  801153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key.1972247e ...
	I1101 10:22:50.421147  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key.1972247e: {Name:mk3ea9a0ec641cc46d0442498893701a7ff31d83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.421280  801153 certs.go:382] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt.1972247e -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt
	I1101 10:22:50.421388  801153 certs.go:386] copying /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key.1972247e -> /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key
	I1101 10:22:50.421481  801153 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.key
	I1101 10:22:50.421513  801153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.crt with IP's: []
	I1101 10:22:50.523352  801153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.crt ...
	I1101 10:22:50.523385  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.crt: {Name:mkdabf4a4e14609c551b4cc6cbdd216d5c522d1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.523597  801153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.key ...
	I1101 10:22:50.523630  801153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.key: {Name:mkd575364c466831f95d7cc92f8b3bd08eca5781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:22:50.523907  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem (1338 bytes)
	W1101 10:22:50.523955  801153 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687_empty.pem, impossibly tiny 0 bytes
	I1101 10:22:50.523966  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:22:50.524000  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:22:50.524033  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:22:50.524061  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/certs/key.pem (1675 bytes)
	I1101 10:22:50.524120  801153 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem (1708 bytes)
	I1101 10:22:50.524784  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:22:50.546785  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:22:50.567691  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:22:50.588140  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:22:50.609335  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:22:50.629927  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:22:50.649813  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:22:50.670792  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/calico-456743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:22:50.692637  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:22:50.715601  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/certs/517687.pem --> /usr/share/ca-certificates/517687.pem (1338 bytes)
	I1101 10:22:50.738553  801153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/ssl/certs/5176872.pem --> /usr/share/ca-certificates/5176872.pem (1708 bytes)
	I1101 10:22:50.761162  801153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:22:50.776952  801153 ssh_runner.go:195] Run: openssl version
	I1101 10:22:50.784941  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/517687.pem && ln -fs /usr/share/ca-certificates/517687.pem /etc/ssl/certs/517687.pem"
	I1101 10:22:50.796584  801153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/517687.pem
	I1101 10:22:50.802949  801153 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:35 /usr/share/ca-certificates/517687.pem
	I1101 10:22:50.803022  801153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/517687.pem
	I1101 10:22:50.845568  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/517687.pem /etc/ssl/certs/51391683.0"
	I1101 10:22:50.859764  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5176872.pem && ln -fs /usr/share/ca-certificates/5176872.pem /etc/ssl/certs/5176872.pem"
	I1101 10:22:50.871181  801153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5176872.pem
	I1101 10:22:50.875919  801153 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:35 /usr/share/ca-certificates/5176872.pem
	I1101 10:22:50.875991  801153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5176872.pem
	I1101 10:22:50.915318  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5176872.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:22:50.927095  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:22:50.938708  801153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:22:50.944682  801153 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:22:50.944758  801153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:22:50.984340  801153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:22:50.994679  801153 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:22:50.999221  801153 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:22:50.999288  801153 kubeadm.go:401] StartCluster: {Name:calico-456743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-456743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:22:50.999391  801153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:22:50.999464  801153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:22:51.034162  801153 cri.go:89] found id: ""
	I1101 10:22:51.034255  801153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:22:51.045625  801153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:22:51.057162  801153 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:22:51.057251  801153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:22:51.069881  801153 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:22:51.069907  801153 kubeadm.go:158] found existing configuration files:
	
	I1101 10:22:51.069971  801153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:22:51.080795  801153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:22:51.080903  801153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:22:51.092996  801153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:22:51.105808  801153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:22:51.105902  801153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:22:51.116235  801153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:22:51.127132  801153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:22:51.127216  801153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:22:51.138360  801153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:22:51.148575  801153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:22:51.148672  801153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:22:51.158405  801153 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:22:51.205335  801153 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:22:51.205451  801153 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:22:51.228780  801153 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:22:51.228881  801153 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:22:51.228926  801153 kubeadm.go:319] OS: Linux
	I1101 10:22:51.228975  801153 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:22:51.229030  801153 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:22:51.229087  801153 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:22:51.229156  801153 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:22:51.229225  801153 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:22:51.229295  801153 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:22:51.229367  801153 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:22:51.229444  801153 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:22:51.305667  801153 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:22:51.305830  801153 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:22:51.305984  801153 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:22:51.315368  801153 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 10:22:48.606535  793145 node_ready.go:57] node "kindnet-456743" has "Ready":"False" status (will retry)
	I1101 10:22:51.108106  793145 node_ready.go:49] node "kindnet-456743" is "Ready"
	I1101 10:22:51.108149  793145 node_ready.go:38] duration metric: took 11.005055521s for node "kindnet-456743" to be "Ready" ...
	I1101 10:22:51.108167  793145 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:22:51.108222  793145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:22:51.123311  793145 api_server.go:72] duration metric: took 11.446702053s to wait for apiserver process to appear ...
	I1101 10:22:51.123339  793145 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:22:51.123363  793145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:22:51.128711  793145 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 10:22:51.129917  793145 api_server.go:141] control plane version: v1.34.1
	I1101 10:22:51.129955  793145 api_server.go:131] duration metric: took 6.606268ms to wait for apiserver health ...
	I1101 10:22:51.129968  793145 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:22:51.133640  793145 system_pods.go:59] 8 kube-system pods found
	I1101 10:22:51.133688  793145 system_pods.go:61] "coredns-66bc5c9577-hfck8" [bb6e02d9-855e-4b3c-876a-b0f31452f63d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:22:51.133697  793145 system_pods.go:61] "etcd-kindnet-456743" [7bf3f7a0-663a-4ed6-bc98-ca0fca7bf135] Running
	I1101 10:22:51.133706  793145 system_pods.go:61] "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
	I1101 10:22:51.133712  793145 system_pods.go:61] "kube-apiserver-kindnet-456743" [d29df756-729a-4f36-9631-dc3d4bb6d27e] Running
	I1101 10:22:51.133717  793145 system_pods.go:61] "kube-controller-manager-kindnet-456743" [7c8ad070-7fd8-4e4a-8714-60190a71324f] Running
	I1101 10:22:51.133723  793145 system_pods.go:61] "kube-proxy-vqxg4" [0b0846f6-61ee-4ca7-9618-bb31448778aa] Running
	I1101 10:22:51.133728  793145 system_pods.go:61] "kube-scheduler-kindnet-456743" [c8f59798-aad0-4bc3-a524-edfcfd89e0d8] Running
	I1101 10:22:51.133735  793145 system_pods.go:61] "storage-provisioner" [324e7159-d2ae-455f-8ff7-b3ffbf64d668] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:22:51.133761  793145 system_pods.go:74] duration metric: took 3.784985ms to wait for pod list to return data ...
	I1101 10:22:51.133775  793145 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:22:51.137357  793145 default_sa.go:45] found service account: "default"
	I1101 10:22:51.137391  793145 default_sa.go:55] duration metric: took 3.607499ms for default service account to be created ...
	I1101 10:22:51.137405  793145 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:22:51.140936  793145 system_pods.go:86] 8 kube-system pods found
	I1101 10:22:51.140979  793145 system_pods.go:89] "coredns-66bc5c9577-hfck8" [bb6e02d9-855e-4b3c-876a-b0f31452f63d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:22:51.141000  793145 system_pods.go:89] "etcd-kindnet-456743" [7bf3f7a0-663a-4ed6-bc98-ca0fca7bf135] Running
	I1101 10:22:51.141023  793145 system_pods.go:89] "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
	I1101 10:22:51.141034  793145 system_pods.go:89] "kube-apiserver-kindnet-456743" [d29df756-729a-4f36-9631-dc3d4bb6d27e] Running
	I1101 10:22:51.141038  793145 system_pods.go:89] "kube-controller-manager-kindnet-456743" [7c8ad070-7fd8-4e4a-8714-60190a71324f] Running
	I1101 10:22:51.141045  793145 system_pods.go:89] "kube-proxy-vqxg4" [0b0846f6-61ee-4ca7-9618-bb31448778aa] Running
	I1101 10:22:51.141055  793145 system_pods.go:89] "kube-scheduler-kindnet-456743" [c8f59798-aad0-4bc3-a524-edfcfd89e0d8] Running
	I1101 10:22:51.141062  793145 system_pods.go:89] "storage-provisioner" [324e7159-d2ae-455f-8ff7-b3ffbf64d668] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:22:51.141092  793145 retry.go:31] will retry after 282.590259ms: missing components: kube-dns
	I1101 10:22:51.430942  793145 system_pods.go:86] 8 kube-system pods found
	I1101 10:22:51.430987  793145 system_pods.go:89] "coredns-66bc5c9577-hfck8" [bb6e02d9-855e-4b3c-876a-b0f31452f63d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:22:51.430996  793145 system_pods.go:89] "etcd-kindnet-456743" [7bf3f7a0-663a-4ed6-bc98-ca0fca7bf135] Running
	I1101 10:22:51.431004  793145 system_pods.go:89] "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
	I1101 10:22:51.431009  793145 system_pods.go:89] "kube-apiserver-kindnet-456743" [d29df756-729a-4f36-9631-dc3d4bb6d27e] Running
	I1101 10:22:51.431015  793145 system_pods.go:89] "kube-controller-manager-kindnet-456743" [7c8ad070-7fd8-4e4a-8714-60190a71324f] Running
	I1101 10:22:51.431031  793145 system_pods.go:89] "kube-proxy-vqxg4" [0b0846f6-61ee-4ca7-9618-bb31448778aa] Running
	I1101 10:22:51.431035  793145 system_pods.go:89] "kube-scheduler-kindnet-456743" [c8f59798-aad0-4bc3-a524-edfcfd89e0d8] Running
	I1101 10:22:51.431042  793145 system_pods.go:89] "storage-provisioner" [324e7159-d2ae-455f-8ff7-b3ffbf64d668] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:22:51.431063  793145 retry.go:31] will retry after 346.269844ms: missing components: kube-dns
	I1101 10:22:51.781622  793145 system_pods.go:86] 8 kube-system pods found
	I1101 10:22:51.781655  793145 system_pods.go:89] "coredns-66bc5c9577-hfck8" [bb6e02d9-855e-4b3c-876a-b0f31452f63d] Running
	I1101 10:22:51.781661  793145 system_pods.go:89] "etcd-kindnet-456743" [7bf3f7a0-663a-4ed6-bc98-ca0fca7bf135] Running
	I1101 10:22:51.781664  793145 system_pods.go:89] "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
	I1101 10:22:51.781669  793145 system_pods.go:89] "kube-apiserver-kindnet-456743" [d29df756-729a-4f36-9631-dc3d4bb6d27e] Running
	I1101 10:22:51.781672  793145 system_pods.go:89] "kube-controller-manager-kindnet-456743" [7c8ad070-7fd8-4e4a-8714-60190a71324f] Running
	I1101 10:22:51.781676  793145 system_pods.go:89] "kube-proxy-vqxg4" [0b0846f6-61ee-4ca7-9618-bb31448778aa] Running
	I1101 10:22:51.781679  793145 system_pods.go:89] "kube-scheduler-kindnet-456743" [c8f59798-aad0-4bc3-a524-edfcfd89e0d8] Running
	I1101 10:22:51.781682  793145 system_pods.go:89] "storage-provisioner" [324e7159-d2ae-455f-8ff7-b3ffbf64d668] Running
	I1101 10:22:51.781692  793145 system_pods.go:126] duration metric: took 644.279491ms to wait for k8s-apps to be running ...
	I1101 10:22:51.781698  793145 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:22:51.781749  793145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:22:51.797335  793145 system_svc.go:56] duration metric: took 15.620993ms WaitForService to wait for kubelet
	I1101 10:22:51.797374  793145 kubeadm.go:587] duration metric: took 12.120770159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:22:51.797402  793145 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:22:51.801143  793145 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:22:51.801172  793145 node_conditions.go:123] node cpu capacity is 8
	I1101 10:22:51.801186  793145 node_conditions.go:105] duration metric: took 3.77935ms to run NodePressure ...
	I1101 10:22:51.801199  793145 start.go:242] waiting for startup goroutines ...
	I1101 10:22:51.801206  793145 start.go:247] waiting for cluster config update ...
	I1101 10:22:51.801217  793145 start.go:256] writing updated cluster config ...
	I1101 10:22:51.801489  793145 ssh_runner.go:195] Run: rm -f paused
	I1101 10:22:51.806495  793145 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:22:51.811298  793145 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hfck8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.816297  793145 pod_ready.go:94] pod "coredns-66bc5c9577-hfck8" is "Ready"
	I1101 10:22:51.816336  793145 pod_ready.go:86] duration metric: took 5.011512ms for pod "coredns-66bc5c9577-hfck8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.818762  793145 pod_ready.go:83] waiting for pod "etcd-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.823460  793145 pod_ready.go:94] pod "etcd-kindnet-456743" is "Ready"
	I1101 10:22:51.823489  793145 pod_ready.go:86] duration metric: took 4.694399ms for pod "etcd-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.825831  793145 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.830102  793145 pod_ready.go:94] pod "kube-apiserver-kindnet-456743" is "Ready"
	I1101 10:22:51.830129  793145 pod_ready.go:86] duration metric: took 4.260597ms for pod "kube-apiserver-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:51.832503  793145 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:52.211970  793145 pod_ready.go:94] pod "kube-controller-manager-kindnet-456743" is "Ready"
	I1101 10:22:52.212001  793145 pod_ready.go:86] duration metric: took 379.473155ms for pod "kube-controller-manager-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:52.411929  793145 pod_ready.go:83] waiting for pod "kube-proxy-vqxg4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:52.811722  793145 pod_ready.go:94] pod "kube-proxy-vqxg4" is "Ready"
	I1101 10:22:52.811753  793145 pod_ready.go:86] duration metric: took 399.797989ms for pod "kube-proxy-vqxg4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:53.012285  793145 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:53.411581  793145 pod_ready.go:94] pod "kube-scheduler-kindnet-456743" is "Ready"
	I1101 10:22:53.411613  793145 pod_ready.go:86] duration metric: took 399.29963ms for pod "kube-scheduler-kindnet-456743" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:22:53.411626  793145 pod_ready.go:40] duration metric: took 1.605087631s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:22:53.468335  793145 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:22:53.496593  793145 out.go:179] * Done! kubectl is now configured to use "kindnet-456743" cluster and "default" namespace by default
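
The startup sequence above for the "kindnet-456743" profile follows a simple poll-until-healthy pattern: api_server.go keeps requesting https://192.168.103.2:8443/healthz until it returns 200, and pod_ready.go then waits for each labelled kube-system pod to report "Ready". A minimal Go sketch of that polling pattern, assuming a plain HTTP client with TLS verification disabled (the real check authenticates against the cluster CA), not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the timeout elapses, mirroring the retry loop seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch-only assumption: skip certificate verification; the real
		// client trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered /healthz with 200 ("ok")
			}
		}
		time.Sleep(500 * time.Millisecond) // back off before the next attempt
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above; adjust for another cluster.
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Once the kubeconfig has been written, the same check can also be run as "kubectl get --raw /healthz".
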
	I1101 10:22:51.318052  801153 out.go:252]   - Generating certificates and keys ...
	I1101 10:22:51.318176  801153 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:22:51.318283  801153 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:22:51.728492  801153 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:22:52.056716  801153 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:22:52.294076  801153 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:22:52.444149  801153 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:22:52.808601  801153 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:22:52.808789  801153 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-456743 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:22:52.987332  801153 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:22:52.987700  801153 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-456743 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:22:53.108578  801153 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:22:53.498286  801153 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:22:53.856542  801153 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:22:53.856659  801153 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:22:54.056209  801153 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:22:54.340500  801153 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:22:54.474563  801153 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:22:54.686908  801153 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:22:54.838119  801153 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:22:54.838778  801153 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:22:54.846892  801153 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:22:50.388820  805154 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:22:50.389083  805154 start.go:159] libmachine.API.Create for "custom-flannel-456743" (driver="docker")
	I1101 10:22:50.389120  805154 client.go:173] LocalClient.Create starting
	I1101 10:22:50.389184  805154 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/ca.pem
	I1101 10:22:50.389223  805154 main.go:143] libmachine: Decoding PEM data...
	I1101 10:22:50.389235  805154 main.go:143] libmachine: Parsing certificate...
	I1101 10:22:50.389302  805154 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-514161/.minikube/certs/cert.pem
	I1101 10:22:50.389324  805154 main.go:143] libmachine: Decoding PEM data...
	I1101 10:22:50.389335  805154 main.go:143] libmachine: Parsing certificate...
	I1101 10:22:50.389667  805154 cli_runner.go:164] Run: docker network inspect custom-flannel-456743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:22:50.407866  805154 cli_runner.go:211] docker network inspect custom-flannel-456743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:22:50.407976  805154 network_create.go:284] running [docker network inspect custom-flannel-456743] to gather additional debugging logs...
	I1101 10:22:50.408002  805154 cli_runner.go:164] Run: docker network inspect custom-flannel-456743
	W1101 10:22:50.426015  805154 cli_runner.go:211] docker network inspect custom-flannel-456743 returned with exit code 1
	I1101 10:22:50.426066  805154 network_create.go:287] error running [docker network inspect custom-flannel-456743]: docker network inspect custom-flannel-456743: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-456743 not found
	I1101 10:22:50.426083  805154 network_create.go:289] output of [docker network inspect custom-flannel-456743]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-456743 not found
	
	** /stderr **
	I1101 10:22:50.426199  805154 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:22:50.444799  805154 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db3052bfa0e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:6a:af:78:80:46} reservation:<nil>}
	I1101 10:22:50.445545  805154 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-99d2741e1e59 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:99:ce:91:38:1c} reservation:<nil>}
	I1101 10:22:50.446275  805154 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a696a61d1319 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:f0:66:2c:aa:f2} reservation:<nil>}
	I1101 10:22:50.446931  805154 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0fdd894de01b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:09:d4:bc:cb:f6} reservation:<nil>}
	I1101 10:22:50.447724  805154 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f195e0}
	I1101 10:22:50.447753  805154 network_create.go:124] attempt to create docker network custom-flannel-456743 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:22:50.447811  805154 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-456743 custom-flannel-456743
	I1101 10:22:50.508786  805154 network_create.go:108] docker network custom-flannel-456743 192.168.85.0/24 created
	I1101 10:22:50.508825  805154 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-456743" container
	I1101 10:22:50.508934  805154 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:22:50.527830  805154 cli_runner.go:164] Run: docker volume create custom-flannel-456743 --label name.minikube.sigs.k8s.io=custom-flannel-456743 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:22:50.548596  805154 oci.go:103] Successfully created a docker volume custom-flannel-456743
	I1101 10:22:50.548723  805154 cli_runner.go:164] Run: docker run --rm --name custom-flannel-456743-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-456743 --entrypoint /usr/bin/test -v custom-flannel-456743:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:22:50.957485  805154 oci.go:107] Successfully prepared a docker volume custom-flannel-456743
	I1101 10:22:50.957549  805154 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:22:50.957585  805154 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:22:50.957663  805154 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-456743:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
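
The network_create.go lines above show how a free Docker subnet is picked for the new profile: candidate /24 ranges are tried in steps of 9 (192.168.49.0/24, .58, .67, .76) and skipped while an existing bridge already owns them, and "docker network create" is then run on the first free one (192.168.85.0/24 here). A rough sketch of that scan, with a hypothetical taken set standing in for the real bridge-interface inspection and a hypothetical network name:

package main

import (
	"fmt"
	"os/exec"
)

// firstFreeSubnet walks 192.168.x.0/24 ranges upward in steps of 9, the same
// progression visible in the log (.49, .58, .67, .76, .85), and returns the
// first one not already claimed.
func firstFreeSubnet(taken map[string]bool) (string, error) {
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet, nil
		}
	}
	return "", fmt.Errorf("no free /24 subnet found")
}

func main() {
	// Subnets reported as taken in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	subnet, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	// Equivalent docker invocation (network name is hypothetical); printed
	// here rather than executed.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "example-net")
	fmt.Println(cmd.String())
}
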
	
	
	==> CRI-O <==
	Nov 01 10:22:18 embed-certs-678014 crio[558]: time="2025-11-01T10:22:18.007032399Z" level=info msg="Started container" PID=1738 containerID=6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper id=5213f6e5-3861-4962-a56d-ad96b7b4eab4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0f988da58be2a6372cdabee768ba194a378a5710c7cb1a12f81abac133187e2
	Nov 01 10:22:18 embed-certs-678014 crio[558]: time="2025-11-01T10:22:18.969027349Z" level=info msg="Removing container: e402119d9689ac3dac99ec561209fea4106acc4b5c5317ea72c7349fe7cc500b" id=f91f8df2-33b2-407c-86f7-8e7c7dffd2b7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:18 embed-certs-678014 crio[558]: time="2025-11-01T10:22:18.981489165Z" level=info msg="Removed container e402119d9689ac3dac99ec561209fea4106acc4b5c5317ea72c7349fe7cc500b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper" id=f91f8df2-33b2-407c-86f7-8e7c7dffd2b7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.014749988Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9e807c26-7f09-44af-9a0e-a5f5e074b2f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.015924824Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9bfe896b-ca36-475e-806f-84dad8ffd885 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.017295877Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a2f5db01-2190-4c8a-800b-5cb76a516f16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.017494772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.02293917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.023242041Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bc71c9b1fd531010cbfefbac39bb401a8a22b1525bbcf92e8279fb08e01cf533/merged/etc/passwd: no such file or directory"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.023275059Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bc71c9b1fd531010cbfefbac39bb401a8a22b1525bbcf92e8279fb08e01cf533/merged/etc/group: no such file or directory"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.02363096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.052238299Z" level=info msg="Created container 5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed: kube-system/storage-provisioner/storage-provisioner" id=a2f5db01-2190-4c8a-800b-5cb76a516f16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.053084598Z" level=info msg="Starting container: 5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed" id=00962976-fe3c-49ee-80e9-f13cd61b7f58 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:22:35 embed-certs-678014 crio[558]: time="2025-11-01T10:22:35.05555872Z" level=info msg="Started container" PID=1753 containerID=5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed description=kube-system/storage-provisioner/storage-provisioner id=00962976-fe3c-49ee-80e9-f13cd61b7f58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5922f00fb3cd66a2fa3684e0dbd57130a3056f4a5a150f7e26a26b1628c4aaf8
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.853414193Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=12dcc5b2-e4ce-4d73-99d5-e14e0d25197d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.85482214Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f8329794-1436-4117-8768-32840f234a1a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.856237326Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper" id=b2a17f55-4dc6-4ae5-91e8-83eb2355c1af name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.856414284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.864143412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.864616854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.903619124Z" level=info msg="Created container 9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper" id=b2a17f55-4dc6-4ae5-91e8-83eb2355c1af name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.904390832Z" level=info msg="Starting container: 9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629" id=71aa64c0-0be4-46ba-bd66-bd13f5d3cf88 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:22:37 embed-certs-678014 crio[558]: time="2025-11-01T10:22:37.906680252Z" level=info msg="Started container" PID=1767 containerID=9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper id=71aa64c0-0be4-46ba-bd66-bd13f5d3cf88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0f988da58be2a6372cdabee768ba194a378a5710c7cb1a12f81abac133187e2
	Nov 01 10:22:38 embed-certs-678014 crio[558]: time="2025-11-01T10:22:38.030167653Z" level=info msg="Removing container: 6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8" id=6c9dbf8e-1609-402a-99ca-31d4e840fb4f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:22:38 embed-certs-678014 crio[558]: time="2025-11-01T10:22:38.040895846Z" level=info msg="Removed container 6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz/dashboard-metrics-scraper" id=6c9dbf8e-1609-402a-99ca-31d4e840fb4f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9ea3b3518d7a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   c0f988da58be2       dashboard-metrics-scraper-6ffb444bf9-k2wlz   kubernetes-dashboard
	5be0079577779       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   5922f00fb3cd6       storage-provisioner                          kube-system
	6ff08c3f98900       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   a634e7898d58c       kubernetes-dashboard-855c9754f9-cpmxg        kubernetes-dashboard
	dfa54d1337f69       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   c3d95bd3ca2e3       busybox                                      default
	e9c8510360460       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   1fbb0af0101d9       coredns-66bc5c9577-vlf7q                     kube-system
	090328e2d66c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   5922f00fb3cd6       storage-provisioner                          kube-system
	7b3d50aff9126       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   ad69d5c3b49aa       kindnet-fzb8b                                kube-system
	901ec54f9139c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   c998c19487d95       kube-proxy-tlw2d                             kube-system
	77c8dcd2cdbb1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   d677d7d29066d       kube-apiserver-embed-certs-678014            kube-system
	9882b066954b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   34a2a3c62e39a       etcd-embed-certs-678014                      kube-system
	bb7743b9e3f29       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   faa64f5c421eb       kube-controller-manager-embed-certs-678014   kube-system
	a4e56bd25efad       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   c7882dec93ecc       kube-scheduler-embed-certs-678014            kube-system
	
	
	==> coredns [e9c85103604609c36cfb00de71bfe70f095051d470ae83fe1db5422a8554bc65] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49039 - 18623 "HINFO IN 89422442011442453.521556987964252447. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.035445654s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
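
The "plugin/ready: Still waiting on" lines above come from CoreDNS's ready plugin, which serves an HTTP readiness endpoint (port 8181, path /ready by default) that only returns 200 once every readiness-reporting plugin has finished its initial sync. A minimal probe against it, assuming direct reachability of the pod IP (placeholder address below):

package main

import (
	"fmt"
	"net/http"
)

// checkCoreDNSReady hits the CoreDNS "ready" plugin endpoint, which returns
// 200 only after every readiness-reporting plugin (here, kubernetes) has
// completed its initial sync, the condition the log lines above are tracking.
func checkCoreDNSReady(podIP string) error {
	// 8181 and /ready are the plugin defaults; podIP is a placeholder.
	resp, err := http.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("coredns not ready: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkCoreDNSReady("10.244.0.10"); err != nil {
		fmt.Println(err)
	}
}
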
	
	
	==> describe nodes <==
	Name:               embed-certs-678014
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-678014
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=embed-certs-678014
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_20_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:20:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-678014
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:22:44 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:22:44 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:22:44 +0000   Sat, 01 Nov 2025 10:20:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:22:44 +0000   Sat, 01 Nov 2025 10:21:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-678014
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                03d8f849-7655-423d-8ed7-89c54dfab59c
	  Boot ID:                    3c8e0ac0-e864-44f5-a6fa-b3f24d8ccbf5
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-vlf7q                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m19s
	  kube-system                 etcd-embed-certs-678014                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m25s
	  kube-system                 kindnet-fzb8b                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-embed-certs-678014             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-678014    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-tlw2d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-embed-certs-678014             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k2wlz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cpmxg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m18s              kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m25s              kubelet          Node embed-certs-678014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s              kubelet          Node embed-certs-678014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s              kubelet          Node embed-certs-678014 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m25s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m20s              node-controller  Node embed-certs-678014 event: Registered Node embed-certs-678014 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-678014 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node embed-certs-678014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node embed-certs-678014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node embed-certs-678014 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node embed-certs-678014 event: Registered Node embed-certs-678014 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 e7 bf d4 e5 10 08 06
	[  +6.343144] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 7a 3f b6 6c e2 60 08 06
	[Nov 1 09:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.039683] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023942] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +2.047897] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +4.031692] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[  +8.127542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[ +16.382906] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	[Nov 1 09:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 ec d8 65 1f 0a 0e 43 dd 63 30 99 08 00
	
	
	==> etcd [9882b066954b83924bdc61795f906efe75b16f0dcdb7b9d8bce879789c8743e3] <==
	{"level":"warn","ts":"2025-11-01T10:22:03.014650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.021925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.031155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.039688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.047787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.064783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.072727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:03.079639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:22:13.812433Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.623561ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765876372471723 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-embed-certs-678014\" mod_revision:583 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-embed-certs-678014\" value_size:5862 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-embed-certs-678014\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:22:13.812671Z","caller":"traceutil/trace.go:172","msg":"trace[549526070] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"239.194657ms","start":"2025-11-01T10:22:13.573432Z","end":"2025-11-01T10:22:13.812626Z","steps":["trace[549526070] 'process raft request'  (duration: 40.744018ms)","trace[549526070] 'compare'  (duration: 197.509449ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:22:14.024008Z","caller":"traceutil/trace.go:172","msg":"trace[1977700606] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"130.813137ms","start":"2025-11-01T10:22:13.893165Z","end":"2025-11-01T10:22:14.023978Z","steps":["trace[1977700606] 'process raft request'  (duration: 40.080541ms)","trace[1977700606] 'compare'  (duration: 90.529129ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:22:14.353006Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.343155ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765876372471727 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-678014\" mod_revision:461 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-678014\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-678014\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:22:14.353121Z","caller":"traceutil/trace.go:172","msg":"trace[249256731] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"292.177346ms","start":"2025-11-01T10:22:14.060930Z","end":"2025-11-01T10:22:14.353107Z","steps":["trace[249256731] 'process raft request'  (duration: 80.651249ms)","trace[249256731] 'compare'  (duration: 211.160219ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:22:14.851418Z","caller":"traceutil/trace.go:172","msg":"trace[2064654972] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"152.121107ms","start":"2025-11-01T10:22:14.699278Z","end":"2025-11-01T10:22:14.851399Z","steps":["trace[2064654972] 'process raft request'  (duration: 151.994072ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:22:15.036198Z","caller":"traceutil/trace.go:172","msg":"trace[1783793252] linearizableReadLoop","detail":"{readStateIndex:624; appliedIndex:624; }","duration":"117.299545ms","start":"2025-11-01T10:22:14.918872Z","end":"2025-11-01T10:22:15.036171Z","steps":["trace[1783793252] 'read index received'  (duration: 117.291682ms)","trace[1783793252] 'applied index is now lower than readState.Index'  (duration: 6.638µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:22:15.036416Z","caller":"traceutil/trace.go:172","msg":"trace[1533323963] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"175.995307ms","start":"2025-11-01T10:22:14.860406Z","end":"2025-11-01T10:22:15.036401Z","steps":["trace[1533323963] 'process raft request'  (duration: 175.842004ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:22:15.036443Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.536223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-11-01T10:22:15.036491Z","caller":"traceutil/trace.go:172","msg":"trace[975312385] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:587; }","duration":"117.60814ms","start":"2025-11-01T10:22:14.918863Z","end":"2025-11-01T10:22:15.036471Z","steps":["trace[975312385] 'agreement among raft nodes before linearized reading'  (duration: 117.434235ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:22:44.679084Z","caller":"traceutil/trace.go:172","msg":"trace[822222031] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"125.117875ms","start":"2025-11-01T10:22:44.553948Z","end":"2025-11-01T10:22:44.679066Z","steps":["trace[822222031] 'process raft request'  (duration: 124.983051ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:22:44.929152Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.064347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:22:44.929220Z","caller":"traceutil/trace.go:172","msg":"trace[701815087] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:642; }","duration":"155.145243ms","start":"2025-11-01T10:22:44.774060Z","end":"2025-11-01T10:22:44.929206Z","steps":["trace[701815087] 'agreement among raft nodes before linearized reading'  (duration: 66.835659ms)","trace[701815087] 'range keys from in-memory index tree'  (duration: 88.190996ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:22:44.929347Z","caller":"traceutil/trace.go:172","msg":"trace[947219053] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"209.157062ms","start":"2025-11-01T10:22:44.720169Z","end":"2025-11-01T10:22:44.929326Z","steps":["trace[947219053] 'process raft request'  (duration: 120.734261ms)","trace[947219053] 'compare'  (duration: 88.212016ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:22:44.929708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.585841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:22:44.929776Z","caller":"traceutil/trace.go:172","msg":"trace[1277428426] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:643; }","duration":"101.957795ms","start":"2025-11-01T10:22:44.827806Z","end":"2025-11-01T10:22:44.929764Z","steps":["trace[1277428426] 'agreement among raft nodes before linearized reading'  (duration: 101.564316ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:22:54.756238Z","caller":"traceutil/trace.go:172","msg":"trace[801369266] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"264.117025ms","start":"2025-11-01T10:22:54.492101Z","end":"2025-11-01T10:22:54.756218Z","steps":["trace[801369266] 'process raft request'  (duration: 263.991977ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:22:59 up  3:05,  0 user,  load average: 4.69, 3.93, 3.06
	Linux embed-certs-678014 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b3d50aff91266580f138509e805b375d6b764cfe7138fdc0bb1b3780d21f7e0] <==
	I1101 10:22:04.509110       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:22:04.509446       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:22:04.509625       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:22:04.509685       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:22:04.509721       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:22:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:22:04.709762       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:22:04.709783       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:22:04.709794       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:22:04.709920       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:22:05.110153       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:22:05.110180       1 metrics.go:72] Registering metrics
	I1101 10:22:05.207963       1 controller.go:711] "Syncing nftables rules"
	I1101 10:22:14.709771       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:14.709827       1 main.go:301] handling current node
	I1101 10:22:24.709457       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:24.709518       1 main.go:301] handling current node
	I1101 10:22:34.709604       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:34.709638       1 main.go:301] handling current node
	I1101 10:22:44.709890       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:44.709929       1 main.go:301] handling current node
	I1101 10:22:54.709189       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:22:54.709243       1 main.go:301] handling current node
	
	
	==> kube-apiserver [77c8dcd2cdbb15ad48c01e45cd25792e208735c6eda9f44bc1fa9ab853e0081c] <==
	I1101 10:22:03.714114       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:22:03.714404       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:22:03.719099       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:22:03.721217       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:22:03.721285       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:22:03.721296       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:22:03.721304       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:22:03.721310       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:22:03.737697       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:22:03.780405       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:22:03.789387       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:22:03.789437       1 policy_source.go:240] refreshing policies
	I1101 10:22:03.792628       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:22:03.955945       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:22:04.094902       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:22:04.149800       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:22:04.197329       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:22:04.217558       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:22:04.299963       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.160.202"}
	I1101 10:22:04.314150       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.12.253"}
	I1101 10:22:04.608396       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:22:07.065245       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:22:07.462771       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:22:07.462771       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:22:07.662984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bb7743b9e3f295728cb34054b001eac220d6549f08d9f5e304789213cc644bae] <==
	I1101 10:22:07.043214       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:22:07.058678       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:22:07.058699       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:22:07.058762       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:22:07.058783       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:22:07.058883       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:22:07.058916       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:22:07.059102       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:22:07.059211       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-678014"
	I1101 10:22:07.059246       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:22:07.059256       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:22:07.059265       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:22:07.059257       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:22:07.060401       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:22:07.060441       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:22:07.062745       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:22:07.064817       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:22:07.066110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:22:07.067199       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:22:07.068972       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:22:07.071897       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:22:07.077579       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:22:07.077606       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:22:07.077620       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:22:07.082943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [901ec54f9139c34f1066587c7237ab3984a2c279347d55a1d0b038574bbca217] <==
	I1101 10:22:04.297309       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:22:04.366040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:22:04.466788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:22:04.466825       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 10:22:04.466929       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:22:04.485729       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:22:04.485790       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:22:04.491329       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:22:04.491747       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:22:04.491775       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:22:04.493409       1 config.go:200] "Starting service config controller"
	I1101 10:22:04.493443       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:22:04.493482       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:22:04.493488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:22:04.493501       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:22:04.493526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:22:04.493605       1 config.go:309] "Starting node config controller"
	I1101 10:22:04.493667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:22:04.493677       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:22:04.593632       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:22:04.593652       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:22:04.593657       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a4e56bd25efad002d1eb660d328f3fda9e93ba58bb33f2e388635b902755f1e9] <==
	I1101 10:22:02.124868       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:22:03.643970       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:22:03.644040       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:22:03.644057       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:22:03.644067       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:22:03.682662       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:22:03.682698       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:22:03.689887       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:22:03.689935       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:22:03.691115       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:22:03.691198       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:22:03.694784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:22:03.696643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1101 10:22:04.990668       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:22:07 embed-certs-678014 kubelet[718]: I1101 10:22:07.673348     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvqw\" (UniqueName: \"kubernetes.io/projected/e5f61d2c-8a94-4c09-a691-33ae048466f1-kube-api-access-xcvqw\") pod \"dashboard-metrics-scraper-6ffb444bf9-k2wlz\" (UID: \"e5f61d2c-8a94-4c09-a691-33ae048466f1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz"
	Nov 01 10:22:07 embed-certs-678014 kubelet[718]: I1101 10:22:07.673400     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdwn\" (UniqueName: \"kubernetes.io/projected/6d549260-f10c-4681-8da0-9ae59df674d3-kube-api-access-bsdwn\") pod \"kubernetes-dashboard-855c9754f9-cpmxg\" (UID: \"6d549260-f10c-4681-8da0-9ae59df674d3\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cpmxg"
	Nov 01 10:22:07 embed-certs-678014 kubelet[718]: I1101 10:22:07.673437     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6d549260-f10c-4681-8da0-9ae59df674d3-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cpmxg\" (UID: \"6d549260-f10c-4681-8da0-9ae59df674d3\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cpmxg"
	Nov 01 10:22:07 embed-certs-678014 kubelet[718]: I1101 10:22:07.673514     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e5f61d2c-8a94-4c09-a691-33ae048466f1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-k2wlz\" (UID: \"e5f61d2c-8a94-4c09-a691-33ae048466f1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz"
	Nov 01 10:22:09 embed-certs-678014 kubelet[718]: I1101 10:22:09.639985     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:22:15 embed-certs-678014 kubelet[718]: I1101 10:22:15.975867     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cpmxg" podStartSLOduration=2.198900274 podStartE2EDuration="8.975828652s" podCreationTimestamp="2025-11-01 10:22:07 +0000 UTC" firstStartedPulling="2025-11-01 10:22:07.919638884 +0000 UTC m=+7.168674927" lastFinishedPulling="2025-11-01 10:22:14.696567258 +0000 UTC m=+13.945603305" observedRunningTime="2025-11-01 10:22:15.974731298 +0000 UTC m=+15.223767362" watchObservedRunningTime="2025-11-01 10:22:15.975828652 +0000 UTC m=+15.224864718"
	Nov 01 10:22:17 embed-certs-678014 kubelet[718]: I1101 10:22:17.961960     718 scope.go:117] "RemoveContainer" containerID="e402119d9689ac3dac99ec561209fea4106acc4b5c5317ea72c7349fe7cc500b"
	Nov 01 10:22:18 embed-certs-678014 kubelet[718]: I1101 10:22:18.967344     718 scope.go:117] "RemoveContainer" containerID="e402119d9689ac3dac99ec561209fea4106acc4b5c5317ea72c7349fe7cc500b"
	Nov 01 10:22:18 embed-certs-678014 kubelet[718]: I1101 10:22:18.967549     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:18 embed-certs-678014 kubelet[718]: E1101 10:22:18.967767     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:19 embed-certs-678014 kubelet[718]: I1101 10:22:19.972075     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:19 embed-certs-678014 kubelet[718]: E1101 10:22:19.972236     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:26 embed-certs-678014 kubelet[718]: I1101 10:22:26.623774     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:26 embed-certs-678014 kubelet[718]: E1101 10:22:26.624064     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:35 embed-certs-678014 kubelet[718]: I1101 10:22:35.014295     718 scope.go:117] "RemoveContainer" containerID="090328e2d66c9eab8a50d6179bde736e4e3c793c38917b3d82a09df65c4b1ee2"
	Nov 01 10:22:37 embed-certs-678014 kubelet[718]: I1101 10:22:37.852428     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:38 embed-certs-678014 kubelet[718]: I1101 10:22:38.028701     718 scope.go:117] "RemoveContainer" containerID="6f51017558ce0c778ad533dde703ba295b7061152bce9901a3855219388113c8"
	Nov 01 10:22:38 embed-certs-678014 kubelet[718]: I1101 10:22:38.028928     718 scope.go:117] "RemoveContainer" containerID="9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629"
	Nov 01 10:22:38 embed-certs-678014 kubelet[718]: E1101 10:22:38.029148     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:46 embed-certs-678014 kubelet[718]: I1101 10:22:46.623370     718 scope.go:117] "RemoveContainer" containerID="9ea3b3518d7a664eaa10426edbfb0e91421499b2865838a5a7d32c9d0b989629"
	Nov 01 10:22:46 embed-certs-678014 kubelet[718]: E1101 10:22:46.623620     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2wlz_kubernetes-dashboard(e5f61d2c-8a94-4c09-a691-33ae048466f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2wlz" podUID="e5f61d2c-8a94-4c09-a691-33ae048466f1"
	Nov 01 10:22:53 embed-certs-678014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:22:53 embed-certs-678014 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:22:53 embed-certs-678014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:22:53 embed-certs-678014 systemd[1]: kubelet.service: Consumed 1.841s CPU time.
	
	
	==> kubernetes-dashboard [6ff08c3f9890015052d9adbb802e29cfd38776e9a69671ca1aacbe3ea7955d0a] <==
	2025/11/01 10:22:15 Using namespace: kubernetes-dashboard
	2025/11/01 10:22:15 Using in-cluster config to connect to apiserver
	2025/11/01 10:22:15 Using secret token for csrf signing
	2025/11/01 10:22:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:22:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:22:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:22:15 Generating JWE encryption key
	2025/11/01 10:22:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:22:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:22:15 Initializing JWE encryption key from synchronized object
	2025/11/01 10:22:15 Creating in-cluster Sidecar client
	2025/11/01 10:22:15 Serving insecurely on HTTP port: 9090
	2025/11/01 10:22:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:22:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:22:15 Starting overwatch
	
	
	==> storage-provisioner [090328e2d66c9eab8a50d6179bde736e4e3c793c38917b3d82a09df65c4b1ee2] <==
	I1101 10:22:04.261751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:22:34.269278       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5be0079577779c724e1f3452cf44867d403ed275e921781e8467e360c995dfed] <==
	I1101 10:22:35.069248       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:22:35.078416       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:22:35.078478       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:22:35.081230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:38.536742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:42.797580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:46.396387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:49.450431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:52.472645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:52.478162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:22:52.478354       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:22:52.478431       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8bf23cc-2536-4ed5-ae0e-07000c30e5da", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-678014_0e4427db-d3c3-4189-9760-17e140666d54 became leader
	I1101 10:22:52.478520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-678014_0e4427db-d3c3-4189-9760-17e140666d54!
	W1101 10:22:52.480808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:52.485211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:22:52.579573       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-678014_0e4427db-d3c3-4189-9760-17e140666d54!
	W1101 10:22:54.489082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:54.757374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:56.762960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:56.770970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:58.774257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:58.779658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-678014 -n embed-certs-678014
I1101 10:23:00.040391  517687 config.go:182] Loaded profile config "kindnet-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-678014 -n embed-certs-678014: exit status 2 (406.655266ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-678014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.07s)
E1101 10:24:20.638026  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (262/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 19.82
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 13.92
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.43
21 TestBinaryMirror 1.65
22 TestOffline 51.09
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 155.22
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.45
48 TestAddons/StoppedEnableDisable 18.56
49 TestCertOptions 31.82
50 TestCertExpiration 217.7
52 TestForceSystemdFlag 31.17
53 TestForceSystemdEnv 34.23
58 TestErrorSpam/setup 22.22
59 TestErrorSpam/start 0.72
60 TestErrorSpam/status 1.01
61 TestErrorSpam/pause 6.23
62 TestErrorSpam/unpause 4.85
63 TestErrorSpam/stop 2.64
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.71
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.37
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
75 TestFunctional/serial/CacheCmd/cache/add_local 1.97
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 49.67
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.29
86 TestFunctional/serial/LogsFileCmd 1.29
87 TestFunctional/serial/InvalidService 3.86
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 8.97
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.1
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 29.62
101 TestFunctional/parallel/SSHCmd 0.68
102 TestFunctional/parallel/CpCmd 1.8
103 TestFunctional/parallel/MySQL 19.06
104 TestFunctional/parallel/FileSync 0.43
105 TestFunctional/parallel/CertSync 1.87
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
113 TestFunctional/parallel/License 0.37
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.23
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 7.77
130 TestFunctional/parallel/MountCmd/specific-port 1.74
131 TestFunctional/parallel/ImageCommands/ImageListShort 1.11
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
135 TestFunctional/parallel/ImageCommands/ImageBuild 3.75
136 TestFunctional/parallel/ImageCommands/Setup 1.99
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
140 TestFunctional/parallel/Version/short 0.08
141 TestFunctional/parallel/Version/components 0.61
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
150 TestFunctional/parallel/ServiceCmd/List 1.72
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.72
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 174.05
163 TestMultiControlPlane/serial/DeployApp 6.48
164 TestMultiControlPlane/serial/PingHostFromPods 1.1
165 TestMultiControlPlane/serial/AddWorkerNode 57.32
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 17.87
169 TestMultiControlPlane/serial/StopSecondaryNode 13.39
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.93
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 100.41
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.63
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
176 TestMultiControlPlane/serial/StopCluster 41.82
177 TestMultiControlPlane/serial/RestartCluster 54.66
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
179 TestMultiControlPlane/serial/AddSecondaryNode 47.4
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
185 TestJSONOutput/start/Command 38.06
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.13
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 38.1
211 TestKicCustomNetwork/use_default_bridge_network 23.59
212 TestKicExistingNetwork 27.93
213 TestKicCustomSubnet 27.35
214 TestKicStaticIP 23.82
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 47.17
219 TestMountStart/serial/StartWithMountFirst 6.08
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 8.65
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.75
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 8.2
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 64.5
231 TestMultiNode/serial/DeployApp2Nodes 4.58
232 TestMultiNode/serial/PingHostFrom2Pods 0.75
233 TestMultiNode/serial/AddNode 24.23
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 10.18
237 TestMultiNode/serial/StopNode 2.31
238 TestMultiNode/serial/StartAfterStop 7.46
239 TestMultiNode/serial/RestartKeepsNodes 74.81
240 TestMultiNode/serial/DeleteNode 5.29
241 TestMultiNode/serial/StopMultiNode 28.63
242 TestMultiNode/serial/RestartMultiNode 28.94
243 TestMultiNode/serial/ValidateNameConflict 24.11
250 TestScheduledStopUnix 98.35
253 TestInsufficientStorage 9.98
254 TestRunningBinaryUpgrade 56.21
256 TestKubernetesUpgrade 302.06
257 TestMissingContainerUpgrade 105.11
258 TestStoppedBinaryUpgrade/Setup 3.05
260 TestPause/serial/Start 63.56
261 TestStoppedBinaryUpgrade/Upgrade 73.73
262 TestPause/serial/SecondStartNoReconfiguration 6.57
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/StartWithK8s 32.84
282 TestNetworkPlugins/group/false 6.17
283 TestNoKubernetes/serial/StartWithStopK8s 31.35
287 TestNoKubernetes/serial/Start 6.26
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
289 TestNoKubernetes/serial/ProfileList 1.86
290 TestNoKubernetes/serial/Stop 1.29
291 TestNoKubernetes/serial/StartNoArgs 7.77
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
294 TestStartStop/group/old-k8s-version/serial/FirstStart 50.87
296 TestStartStop/group/no-preload/serial/FirstStart 51.56
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.26
298 TestStartStop/group/no-preload/serial/DeployApp 9.23
300 TestStartStop/group/old-k8s-version/serial/Stop 16.2
302 TestStartStop/group/no-preload/serial/Stop 16.27
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/old-k8s-version/serial/SecondStart 44.76
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
306 TestStartStop/group/no-preload/serial/SecondStart 46.78
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
316 TestStartStop/group/embed-certs/serial/FirstStart 71.4
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.43
320 TestStartStop/group/newest-cni/serial/FirstStart 27.4
321 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/Stop 2.5
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
326 TestStartStop/group/newest-cni/serial/SecondStart 11.25
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.17
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
333 TestStartStop/group/embed-certs/serial/DeployApp 9.28
334 TestNetworkPlugins/group/auto/Start 41.03
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.49
338 TestStartStop/group/embed-certs/serial/Stop 16.77
339 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/embed-certs/serial/SecondStart 48.45
341 TestNetworkPlugins/group/kindnet/Start 45.56
342 TestNetworkPlugins/group/auto/KubeletFlags 0.34
343 TestNetworkPlugins/group/auto/NetCatPod 9.26
344 TestNetworkPlugins/group/auto/DNS 0.11
345 TestNetworkPlugins/group/auto/Localhost 0.1
346 TestNetworkPlugins/group/auto/HairPin 0.1
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
351 TestNetworkPlugins/group/calico/Start 54.83
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
354 TestNetworkPlugins/group/custom-flannel/Start 54.29
355 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
360 TestNetworkPlugins/group/enable-default-cni/Start 66.92
361 TestNetworkPlugins/group/kindnet/DNS 0.19
362 TestNetworkPlugins/group/kindnet/Localhost 0.15
363 TestNetworkPlugins/group/kindnet/HairPin 0.16
364 TestNetworkPlugins/group/flannel/Start 49.8
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.32
367 TestNetworkPlugins/group/calico/NetCatPod 8.19
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
370 TestNetworkPlugins/group/calico/DNS 0.12
371 TestNetworkPlugins/group/calico/Localhost 0.1
372 TestNetworkPlugins/group/calico/HairPin 0.1
373 TestNetworkPlugins/group/custom-flannel/DNS 0.12
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
376 TestNetworkPlugins/group/bridge/Start 42.88
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
384 TestNetworkPlugins/group/flannel/NetCatPod 9.21
385 TestNetworkPlugins/group/flannel/DNS 0.13
386 TestNetworkPlugins/group/flannel/Localhost 0.1
387 TestNetworkPlugins/group/flannel/HairPin 0.12
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
389 TestNetworkPlugins/group/bridge/NetCatPod 9.2
390 TestNetworkPlugins/group/bridge/DNS 0.12
391 TestNetworkPlugins/group/bridge/Localhost 0.09
392 TestNetworkPlugins/group/bridge/HairPin 0.09
x
+
TestDownloadOnly/v1.28.0/json-events (19.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-314542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-314542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (19.822357854s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (19.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 09:28:32.156282  517687 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 09:28:32.156392  517687 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-314542
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-314542: exit status 85 (79.418569ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-314542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-314542 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:28:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:28:12.390891  517698 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:12.391145  517698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:12.391153  517698 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:12.391157  517698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:12.391370  517698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	W1101 09:28:12.391494  517698 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21832-514161/.minikube/config/config.json: open /home/jenkins/minikube-integration/21832-514161/.minikube/config/config.json: no such file or directory
	I1101 09:28:12.392025  517698 out.go:368] Setting JSON to true
	I1101 09:28:12.393010  517698 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7829,"bootTime":1761981463,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:28:12.393102  517698 start.go:143] virtualization: kvm guest
	I1101 09:28:12.394891  517698 out.go:99] [download-only-314542] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:28:12.395019  517698 notify.go:221] Checking for updates...
	W1101 09:28:12.395030  517698 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 09:28:12.395909  517698 out.go:171] MINIKUBE_LOCATION=21832
	I1101 09:28:12.396954  517698 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:28:12.397975  517698 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:28:12.398954  517698 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 09:28:12.399875  517698 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 09:28:12.401395  517698 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:28:12.401751  517698 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:28:12.426216  517698 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:28:12.426344  517698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:12.485961  517698 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-01 09:28:12.47613384 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:12.486131  517698 docker.go:319] overlay module found
	I1101 09:28:12.487479  517698 out.go:99] Using the docker driver based on user configuration
	I1101 09:28:12.487515  517698 start.go:309] selected driver: docker
	I1101 09:28:12.487530  517698 start.go:930] validating driver "docker" against <nil>
	I1101 09:28:12.487662  517698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:12.546154  517698 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-01 09:28:12.536050038 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:12.546391  517698 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:28:12.546916  517698 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1101 09:28:12.547142  517698 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:28:12.548527  517698 out.go:171] Using Docker driver with root privileges
	I1101 09:28:12.549443  517698 cni.go:84] Creating CNI manager for ""
	I1101 09:28:12.549524  517698 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:12.549543  517698 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:28:12.549620  517698 start.go:353] cluster config:
	{Name:download-only-314542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-314542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:28:12.550635  517698 out.go:99] Starting "download-only-314542" primary control-plane node in "download-only-314542" cluster
	I1101 09:28:12.550652  517698 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:28:12.551505  517698 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:28:12.551532  517698 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:28:12.551641  517698 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:28:12.568892  517698 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:12.569065  517698 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:28:12.569153  517698 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:12.918864  517698 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:28:12.918893  517698 cache.go:59] Caching tarball of preloaded images
	I1101 09:28:12.919051  517698 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:28:12.920651  517698 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 09:28:12.920666  517698 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:28:13.034321  517698 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1101 09:28:13.034441  517698 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:28:24.012798  517698 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:28:24.013214  517698 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/download-only-314542/config.json ...
	I1101 09:28:24.013246  517698 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/download-only-314542/config.json: {Name:mk214ce0104aebb195ce09a5876d217d16bd1929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:28:24.013456  517698 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:28:24.013634  517698 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-314542 host does not exist
	  To start a cluster, run: "minikube start -p download-only-314542"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-314542
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (13.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-762265 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-762265 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.920710708s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (13.92s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 09:28:46.552926  517687 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:28:46.552987  517687 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-762265
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-762265: exit status 85 (78.886511ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-314542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-314542 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ delete  │ -p download-only-314542                                                                                                                                                   │ download-only-314542 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │ 01 Nov 25 09:28 UTC │
	│ start   │ -o=json --download-only -p download-only-762265 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-762265 │ jenkins │ v1.37.0 │ 01 Nov 25 09:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:28:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:28:32.688452  518097 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:32.688735  518097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:32.688745  518097 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:32.688749  518097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:32.688998  518097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:28:32.689489  518097 out.go:368] Setting JSON to true
	I1101 09:28:32.690477  518097 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7850,"bootTime":1761981463,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:28:32.690585  518097 start.go:143] virtualization: kvm guest
	I1101 09:28:32.692349  518097 out.go:99] [download-only-762265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:28:32.692536  518097 notify.go:221] Checking for updates...
	I1101 09:28:32.693683  518097 out.go:171] MINIKUBE_LOCATION=21832
	I1101 09:28:32.694726  518097 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:28:32.695944  518097 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:28:32.697108  518097 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 09:28:32.698263  518097 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 09:28:32.700289  518097 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:28:32.700549  518097 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:28:32.724514  518097 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:28:32.724635  518097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:32.785986  518097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-01 09:28:32.775820044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:32.786102  518097 docker.go:319] overlay module found
	I1101 09:28:32.787863  518097 out.go:99] Using the docker driver based on user configuration
	I1101 09:28:32.787902  518097 start.go:309] selected driver: docker
	I1101 09:28:32.787915  518097 start.go:930] validating driver "docker" against <nil>
	I1101 09:28:32.788033  518097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:32.846531  518097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-01 09:28:32.836758957 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:32.846718  518097 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:28:32.847286  518097 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1101 09:28:32.847462  518097 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:28:32.848990  518097 out.go:171] Using Docker driver with root privileges
	I1101 09:28:32.850088  518097 cni.go:84] Creating CNI manager for ""
	I1101 09:28:32.850171  518097 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:28:32.850187  518097 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:28:32.850275  518097 start.go:353] cluster config:
	{Name:download-only-762265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-762265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:28:32.851561  518097 out.go:99] Starting "download-only-762265" primary control-plane node in "download-only-762265" cluster
	I1101 09:28:32.851577  518097 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:28:32.852625  518097 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:28:32.852654  518097 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:28:32.852776  518097 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:28:32.868793  518097 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:28:32.868969  518097 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:28:32.868991  518097 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:28:32.868997  518097 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:28:32.869009  518097 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:28:32.960477  518097 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:28:32.960514  518097 cache.go:59] Caching tarball of preloaded images
	I1101 09:28:32.960702  518097 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:28:32.962499  518097 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1101 09:28:32.962533  518097 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:28:33.079771  518097 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1101 09:28:33.079856  518097 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21832-514161/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-762265 host does not exist
	  To start a cluster, run: "minikube start -p download-only-762265"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
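The "Last Start" log captured above shows the preload being fetched with an MD5 checksum that is first obtained from the GCS API and then appended as a ?checksum=md5:... query parameter. A rough self-contained sketch of that download-and-verify step (minikube delegates this to its own download helpers; this version is only illustrative, with the URL and checksum copied from the log):

	// Hypothetical sketch: stream a file to disk while hashing it, then compare
	// the result against the expected MD5.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := md5.New()
		// Write to disk and hash in a single pass.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
			"/tmp/preload.tar.lz4",
			"d1a46823b9241c5d38b5e0866197f2a8",
		)
		fmt.Println("download result:", err)
	}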

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-762265
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-887018 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-887018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-887018
--- PASS: TestDownloadOnlyKic (0.43s)

TestBinaryMirror (1.65s)

=== RUN   TestBinaryMirror
I1101 09:28:47.747312  517687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-679292 --alsologtostderr --binary-mirror http://127.0.0.1:45711 --driver=docker  --container-runtime=crio
aaa_download_only_test.go:309: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-679292 --alsologtostderr --binary-mirror http://127.0.0.1:45711 --driver=docker  --container-runtime=crio: (1.005205734s)
helpers_test.go:175: Cleaning up "binary-mirror-679292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-679292
--- PASS: TestBinaryMirror (1.65s)
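TestBinaryMirror points minikube at a local HTTP server (127.0.0.1:45711 in the log above) instead of dl.k8s.io when fetching kubectl, kubelet and kubeadm. The test starts its own in-process server; a minimal stand-in would simply serve a directory laid out the way the mirror URLs expect (the exact directory layout assumed below is illustrative), and minikube is then started with --binary-mirror http://127.0.0.1:45711:

	// Hypothetical stand-in for the test's local binary mirror: serve a
	// pre-populated directory, e.g. ./mirror/v1.34.1/bin/linux/amd64/kubectl.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		log.Fatal(http.ListenAndServe("127.0.0.1:45711", http.FileServer(http.Dir("./mirror"))))
	}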

TestOffline (51.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-286433 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-286433 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (48.504232223s)
helpers_test.go:175: Cleaning up "offline-crio-286433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-286433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-286433: (2.585905605s)
--- PASS: TestOffline (51.09s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-050432
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-050432: exit status 85 (280.326549ms)

-- stdout --
	* Profile "addons-050432" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-050432"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-050432
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-050432: exit status 85 (217.717383ms)

-- stdout --
	* Profile "addons-050432" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-050432"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (155.22s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-050432 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-050432 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.215119207s)
--- PASS: TestAddons/Setup (155.22s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-050432 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-050432 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (9.45s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-050432 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-050432 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c7b2f1d0-324a-4e98-abb8-fc235b03a574] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c7b2f1d0-324a-4e98-abb8-fc235b03a574] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003786098s
addons_test.go:694: (dbg) Run:  kubectl --context addons-050432 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-050432 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-050432 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.45s)

TestAddons/StoppedEnableDisable (18.56s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-050432
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-050432: (18.261112041s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-050432
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-050432
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-050432
--- PASS: TestAddons/StoppedEnableDisable (18.56s)

TestCertOptions (31.82s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-278823 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-278823 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (28.351606466s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-278823 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-278823 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-278823 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-278823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-278823
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-278823: (2.698045901s)
--- PASS: TestCertOptions (31.82s)
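The openssl step in TestCertOptions verifies that the extra --apiserver-ips and --apiserver-names values ended up in the apiserver certificate's SANs. An equivalent check in Go, reading the same /var/lib/minikube/certs/apiserver.crt from the node (illustrative only):

	// Hypothetical sketch: parse the apiserver certificate and print its SANs so
	// the extra IPs/names requested at start time can be verified.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost and www.google.com among them
		fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15 among them
	}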

TestCertExpiration (217.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-577441 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-577441 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.108359412s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-577441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-577441 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (8.980836089s)
helpers_test.go:175: Cleaning up "cert-expiration-577441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-577441
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-577441: (2.605646152s)
--- PASS: TestCertExpiration (217.70s)

TestForceSystemdFlag (31.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-767379 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-767379 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.674833072s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-767379 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-767379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-767379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-767379: (5.110850188s)
--- PASS: TestForceSystemdFlag (31.17s)
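The ssh step above cats /etc/crio/crio.conf.d/02-crio.conf to confirm that --force-systemd switched CRI-O to the systemd cgroup manager. A trivial sketch of that check (cgroup_manager is CRI-O's TOML key; the plain substring match below is only illustrative):

	// Hypothetical sketch: read CRI-O's drop-in config on the node and check that
	// the systemd cgroup manager is configured.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is using the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not found in the drop-in")
		}
	}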

TestForceSystemdEnv (34.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-482102 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-482102 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.304483547s)
helpers_test.go:175: Cleaning up "force-systemd-env-482102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-482102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-482102: (4.928310491s)
--- PASS: TestForceSystemdEnv (34.23s)

TestErrorSpam/setup (22.22s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-553219 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-553219 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-553219 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-553219 --driver=docker  --container-runtime=crio: (22.220570945s)
--- PASS: TestErrorSpam/setup (22.22s)

TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (1.01s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (6.23s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause: exit status 80 (2.091210723s)

-- stdout --
	* Pausing node nospam-553219 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause: exit status 80 (2.071082023s)

-- stdout --
	* Pausing node nospam-553219 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause: exit status 80 (2.066672364s)

-- stdout --
	* Pausing node nospam-553219 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.23s)
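All three pause attempts above fail the same way: before pausing, minikube enumerates running containers via "sudo runc list -f json", and on this node /run/runc does not exist, so runc exits with status 1 and minikube surfaces it as GUEST_PAUSE. A bare reproduction of that failing step (illustrative, not minikube's pause code):

	// Hypothetical sketch: run the same runc invocation the pause path uses and
	// show how the "open /run/runc: no such file or directory" error surfaces.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("running containers: %s\n", out)
	}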

TestErrorSpam/unpause (4.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause: exit status 80 (1.814762675s)

-- stdout --
	* Unpausing node nospam-553219 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause: exit status 80 (1.486778163s)

-- stdout --
	* Unpausing node nospam-553219 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause: exit status 80 (1.543191876s)

-- stdout --
	* Unpausing node nospam-553219 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:35:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.85s)

TestErrorSpam/stop (2.64s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 stop: (2.420829273s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-553219 --log_dir /tmp/nospam-553219 stop
--- PASS: TestErrorSpam/stop (2.64s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21832-514161/.minikube/files/etc/test/nested/copy/517687/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.71s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593346 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-593346 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.708406435s)
--- PASS: TestFunctional/serial/StartWithProxy (37.71s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.37s)

=== RUN   TestFunctional/serial/SoftStart
I1101 09:35:59.949096  517687 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593346 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-593346 --alsologtostderr -v=8: (6.367143206s)
functional_test.go:678: soft start took 6.367893592s for "functional-593346" cluster.
I1101 09:36:06.316666  517687 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.37s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-593346 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-593346 /tmp/TestFunctionalserialCacheCmdcacheadd_local981239484/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cache add minikube-local-cache-test:functional-593346
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-593346 cache add minikube-local-cache-test:functional-593346: (1.635649102s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cache delete minikube-local-cache-test:functional-593346
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-593346
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.292417ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
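The cache_reload flow is visible in the commands above: remove a cached image from the node with crictl, confirm that "crictl inspecti" now fails, run "minikube cache reload", then confirm the image is back. The same sequence driven from a small Go program instead of the test harness (commands copied from the log; error handling kept minimal, purely illustrative):

	// Hypothetical re-enactment of the cache reload sequence shown in the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		const profile = "functional-593346"
		const image = "registry.k8s.io/pause:latest"

		_ = run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo crictl rmi "+image)
		// Expected to fail while the image is absent from the node.
		if err := run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
			fmt.Println("image unexpectedly still present")
		}
		_ = run("out/minikube-linux-amd64", "-p", profile, "cache", "reload")
		// Should succeed again once the cached image has been reloaded.
		_ = run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo crictl inspecti "+image)
	}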

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 kubectl -- --context functional-593346 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-593346 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (49.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593346 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 09:36:25.502677  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:25.509135  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:25.520535  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:25.541992  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:25.583421  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:25.664893  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:25.826406  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:26.148110  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:26.790166  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:28.071758  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:30.634652  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:35.756390  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:45.998044  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-593346 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.666853584s)
functional_test.go:776: restart took 49.666991484s for "functional-593346" cluster.
I1101 09:37:03.163687  517687 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (49.67s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-593346 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-593346 logs: (1.28869056s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 logs --file /tmp/TestFunctionalserialLogsFileCmd4200896620/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-593346 logs --file /tmp/TestFunctionalserialLogsFileCmd4200896620/001/logs.txt: (1.29069559s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.86s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-593346 apply -f testdata/invalidsvc.yaml
E1101 09:37:06.480449  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-593346
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-593346: exit status 115 (357.592861ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32010 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-593346 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.86s)
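The step above applies a Service with no running backing pod (the testdata/invalidsvc.yaml manifest is not reproduced here) and expects `minikube service` to fail; the log shows exit status 115 with an SVC_UNREACHABLE message. A small Go sketch of reading that exit code, assuming the same binary and profile (not the test's own code):

// Illustrative sketch: run the same service command and report its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-593346")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("service command exit code:", exitErr.ExitCode()) // 115 in this run
	} else if err == nil {
		fmt.Println("unexpected success: service should be unreachable")
	} else {
		fmt.Println("command did not start:", err)
	}
}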

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 config get cpus: exit status 14 (102.775931ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 config get cpus: exit status 14 (86.251181ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
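The cycle above is unset → get (exit 14, key not found) → set → get (exit 0) → unset → get (exit 14 again). A compact Go sketch of the same cycle, assuming the binary path and profile from this run; it is not the test's own code.

// Illustrative sketch: exercise `minikube config set/get/unset` and observe
// that `config get` on an unset key exits 14, as the log above shows.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// minikube runs the CLI and returns its exit code plus combined output.
func minikube(args ...string) (int, string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), string(out)
	}
	return 0, string(out)
}

func main() {
	profile := []string{"-p", "functional-593346"}
	code, _ := minikube(append(profile, "config", "get", "cpus")...)
	fmt.Println("get with no value set, exit code:", code) // 14 in the log above
	minikube(append(profile, "config", "set", "cpus", "2")...)
	code, out := minikube(append(profile, "config", "get", "cpus")...)
	fmt.Println("get after set, exit code:", code, "output:", out)
	minikube(append(profile, "config", "unset", "cpus")...)
	code, _ = minikube(append(profile, "config", "get", "cpus")...)
	fmt.Println("get after unset, exit code:", code) // back to 14
}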

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-593346 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-593346 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 556521: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.97s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593346 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-593346 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.894879ms)

                                                
                                                
-- stdout --
	* [functional-593346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:37:40.051676  554344 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:37:40.051986  554344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:37:40.051997  554344 out.go:374] Setting ErrFile to fd 2...
	I1101 09:37:40.052001  554344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:37:40.052220  554344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:37:40.052690  554344 out.go:368] Setting JSON to false
	I1101 09:37:40.053732  554344 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8397,"bootTime":1761981463,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:37:40.053832  554344 start.go:143] virtualization: kvm guest
	I1101 09:37:40.055543  554344 out.go:179] * [functional-593346] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:37:40.056585  554344 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:37:40.056587  554344 notify.go:221] Checking for updates...
	I1101 09:37:40.060227  554344 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:37:40.061137  554344 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:37:40.062134  554344 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 09:37:40.063039  554344 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:37:40.063928  554344 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:37:40.065365  554344 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:37:40.065934  554344 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:37:40.089876  554344 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:37:40.089987  554344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:37:40.149277  554344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 09:37:40.138701583 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:37:40.149387  554344 docker.go:319] overlay module found
	I1101 09:37:40.150881  554344 out.go:179] * Using the docker driver based on existing profile
	I1101 09:37:40.151856  554344 start.go:309] selected driver: docker
	I1101 09:37:40.151876  554344 start.go:930] validating driver "docker" against &{Name:functional-593346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-593346 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:37:40.152005  554344 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:37:40.153492  554344 out.go:203] 
	W1101 09:37:40.154576  554344 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 09:37:40.155627  554344 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593346 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-593346 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-593346 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.989951ms)

                                                
                                                
-- stdout --
	* [functional-593346] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:37:40.449290  554618 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:37:40.449415  554618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:37:40.449425  554618 out.go:374] Setting ErrFile to fd 2...
	I1101 09:37:40.449432  554618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:37:40.449796  554618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:37:40.450344  554618 out.go:368] Setting JSON to false
	I1101 09:37:40.451347  554618 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8397,"bootTime":1761981463,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:37:40.451449  554618 start.go:143] virtualization: kvm guest
	I1101 09:37:40.453100  554618 out.go:179] * [functional-593346] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 09:37:40.454676  554618 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:37:40.454689  554618 notify.go:221] Checking for updates...
	I1101 09:37:40.456864  554618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:37:40.457964  554618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 09:37:40.459008  554618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 09:37:40.460018  554618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:37:40.464316  554618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:37:40.465738  554618 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:37:40.466327  554618 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:37:40.489953  554618 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:37:40.490069  554618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:37:40.551203  554618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 09:37:40.540249332 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:37:40.551388  554618 docker.go:319] overlay module found
	I1101 09:37:40.553518  554618 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 09:37:40.554541  554618 start.go:309] selected driver: docker
	I1101 09:37:40.554559  554618 start.go:930] validating driver "docker" against &{Name:functional-593346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-593346 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:37:40.554686  554618 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:37:40.556319  554618 out.go:203] 
	W1101 09:37:40.557307  554618 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 09:37:40.558188  554618 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (29.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [9c494481-440a-4cd4-95ac-7d8ef10d3804] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004832845s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-593346 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-593346 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-593346 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-593346 apply -f testdata/storage-provisioner/pod.yaml
I1101 09:37:16.850520  517687 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7073a3ea-f952-4fdd-83ed-6da720ebe06a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7073a3ea-f952-4fdd-83ed-6da720ebe06a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004069053s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-593346 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-593346 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-593346 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [47079d89-dc20-43db-96f2-c36394380e53] Pending
helpers_test.go:352: "sp-pod" [47079d89-dc20-43db-96f2-c36394380e53] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [47079d89-dc20-43db-96f2-c36394380e53] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004858056s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-593346 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.62s)
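The persistence check in this test is: write a marker file onto the PVC-backed mount, delete and recreate the pod, and confirm the file is still visible, proving the volume (not the pod filesystem) holds the data. A Go sketch of that check, driving kubectl against the objects already applied in the log (the pvc.yaml/pod.yaml manifests are not reproduced here); this is not the test's own code.

// Illustrative sketch: verify that data written to the PVC survives pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) ([]byte, error) {
	return exec.Command("kubectl", append([]string{"--context", "functional-593346"}, args...)...).CombinedOutput()
}

func main() {
	// Write a marker file onto the PVC-backed mount, recreate the pod, then
	// confirm the file is still there.
	if _, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	kubectl("delete", "pod", "sp-pod")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("contents after recreate:\n%s", out) // expect "foo" to persist
}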

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh -n functional-593346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cp functional-593346:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3691132561/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh -n functional-593346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh -n functional-593346 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (19.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-593346 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-d4mbk" [696d2978-f70f-4a89-a338-0954219b3c4c] Pending
helpers_test.go:352: "mysql-5bb876957f-d4mbk" [696d2978-f70f-4a89-a338-0954219b3c4c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-d4mbk" [696d2978-f70f-4a89-a338-0954219b3c4c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003556479s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-593346 exec mysql-5bb876957f-d4mbk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-593346 exec mysql-5bb876957f-d4mbk -- mysql -ppassword -e "show databases;": exit status 1 (90.96666ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 09:37:27.956395  517687 retry.go:31] will retry after 625.218513ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-593346 exec mysql-5bb876957f-d4mbk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-593346 exec mysql-5bb876957f-d4mbk -- mysql -ppassword -e "show databases;": exit status 1 (89.397594ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 09:37:28.671435  517687 retry.go:31] will retry after 1.99485623s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-593346 exec mysql-5bb876957f-d4mbk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.06s)
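The retry.go lines above show the harness backing off while mysqld inside the pod is still starting and refusing socket connections (ERROR 2002). A Go sketch of that retry-with-backoff pattern, using the pod name from this run; illustrative only, not the test's own code.

// Illustrative sketch: poll `show databases;` until mysqld accepts connections.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func showDatabases(pod string) error {
	return exec.Command("kubectl", "--context", "functional-593346",
		"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").Run()
}

func main() {
	const pod = "mysql-5bb876957f-d4mbk" // pod name from this run
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if err := showDatabases(pod); err == nil {
			fmt.Println("mysql answered on attempt", attempt)
			return
		}
		time.Sleep(backoff)
		backoff *= 2 // grow the wait, roughly like the harness does
	}
	fmt.Println("mysql never became reachable")
}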

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/517687/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo cat /etc/test/nested/copy/517687/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/517687.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo cat /etc/ssl/certs/517687.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/517687.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo cat /usr/share/ca-certificates/517687.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5176872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo cat /etc/ssl/certs/5176872.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5176872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo cat /usr/share/ca-certificates/5176872.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.87s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-593346 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 ssh "sudo systemctl is-active docker": exit status 1 (303.898467ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 ssh "sudo systemctl is-active containerd": exit status 1 (299.351935ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
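On a CRI-O cluster the docker and containerd units should be inactive; `systemctl is-active` prints the state and exits non-zero for anything other than "active", which is why the log above records exit status 3 from the remote command. A Go sketch of the same check, assuming the binary and profile from this run (not the test's own code):

// Illustrative sketch: confirm the non-active runtimes are disabled in the guest.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func unitState(unit string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-593346",
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		state := unitState(unit)
		if strings.Contains(state, "inactive") {
			fmt.Println(unit, "is inactive, as expected on a crio runtime")
		} else {
			fmt.Println(unit, "unexpected state:", state)
		}
	}
}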

                                                
                                    
x
+
TestFunctional/parallel/License (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-593346 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-593346 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-593346 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-593346 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 550581: os: process already finished
helpers_test.go:519: unable to terminate pid 550277: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-593346 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-593346 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [3a4b4a2f-995c-4c8c-8ccb-ed273c4b211d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [3a4b4a2f-995c-4c8c-8ccb-ed273c4b211d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004236035s
I1101 09:37:19.842936  517687 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.23s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-593346 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.138.125 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
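With `minikube tunnel` running, the LoadBalancer ingress IP assigned to nginx-svc (10.109.138.125 in this run) should answer plain HTTP from the host. A Go sketch of that reachability check, reading the IP with the same jsonpath query used earlier in the log; illustrative only, not the test's own code.

// Illustrative sketch: fetch the service's ingress IP and probe it over HTTP.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-593346",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil || len(out) == 0 {
		fmt.Println("no ingress IP yet; is the tunnel running?")
		return
	}
	ip := strings.TrimSpace(string(out))
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel answered with status", resp.Status)
}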

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-593346 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "355.449665ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.341299ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "354.329853ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.786263ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdany-port1125882057/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761989852043985228" to /tmp/TestFunctionalparallelMountCmdany-port1125882057/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761989852043985228" to /tmp/TestFunctionalparallelMountCmdany-port1125882057/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761989852043985228" to /tmp/TestFunctionalparallelMountCmdany-port1125882057/001/test-1761989852043985228
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (315.716827ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:37:32.360019  517687 retry.go:31] will retry after 432.533261ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T /mount-9p | grep 9p"
I1101 09:37:32.864849  517687 detect.go:223] nested VM detected
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 09:37 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 09:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 09:37 test-1761989852043985228
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh cat /mount-9p/test-1761989852043985228
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-593346 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8cab8264-f5b4-4830-9a24-dc8836276c0f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8cab8264-f5b4-4830-9a24-dc8836276c0f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8cab8264-f5b4-4830-9a24-dc8836276c0f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004059972s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-593346 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdany-port1125882057/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.77s)
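
The retry line above (retry.go:31 "will retry after 432.533261ms") is the poll-until-mounted step: the first findmnt over the 9p mount races the mount daemon, so the same guest-side check is simply re-run after a short delay. Below is a minimal Go sketch of that pattern, assuming the same binary path, profile name and mount point as this run; the fixed 500ms delay is an illustration, not the test's actual backoff.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForNinePMount re-runs the check the test uses
// ("findmnt -T <dir> | grep 9p" inside the guest) until it succeeds
// or the attempts run out.
func waitForNinePMount(profile, mountPoint string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return nil // the 9p mount is visible inside the guest
		}
		time.Sleep(500 * time.Millisecond) // illustrative delay; the real retry varies it
	}
	return fmt.Errorf("%s never appeared as a 9p mount", mountPoint)
}

func main() {
	if err := waitForNinePMount("functional-593346", "/mount-9p", 10); err != nil {
		fmt.Println(err)
	}
}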

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdspecific-port2333818053/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (316.415073ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:37:40.135337  517687 retry.go:31] will retry after 361.537583ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdspecific-port2333818053/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 ssh "sudo umount -f /mount-9p": exit status 1 (280.051253ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-593346 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdspecific-port2333818053/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-593346 image ls --format short --alsologtostderr: (1.113260386s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593346 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593346 image ls --format short --alsologtostderr:
I1101 09:37:50.319019  558003 out.go:360] Setting OutFile to fd 1 ...
I1101 09:37:50.319385  558003 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:50.319396  558003 out.go:374] Setting ErrFile to fd 2...
I1101 09:37:50.319401  558003 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:50.319673  558003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
I1101 09:37:50.320483  558003 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:50.320669  558003 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:50.321232  558003 cli_runner.go:164] Run: docker container inspect functional-593346 --format={{.State.Status}}
I1101 09:37:50.344745  558003 ssh_runner.go:195] Run: systemctl --version
I1101 09:37:50.344818  558003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-593346
I1101 09:37:50.366753  558003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/functional-593346/id_rsa Username:docker}
I1101 09:37:50.480732  558003 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593346 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 9d0e6f6199dcb │ 155MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593346 image ls --format table --alsologtostderr:
I1101 09:37:53.625790  558397 out.go:360] Setting OutFile to fd 1 ...
I1101 09:37:53.626323  558397 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:53.626342  558397 out.go:374] Setting ErrFile to fd 2...
I1101 09:37:53.626349  558397 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:53.626787  558397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
I1101 09:37:53.627902  558397 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:53.628001  558397 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:53.628388  558397 cli_runner.go:164] Run: docker container inspect functional-593346 --format={{.State.Status}}
I1101 09:37:53.646392  558397 ssh_runner.go:195] Run: systemctl --version
I1101 09:37:53.646453  558397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-593346
I1101 09:37:53.663530  558397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/functional-593346/id_rsa Username:docker}
I1101 09:37:53.764111  558397 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593346 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","re
poDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec","repoDigests":["docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e5
2808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969
449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["re
gistry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080
d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause
:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593346 image ls --format json --alsologtostderr:
I1101 09:37:53.394416  558344 out.go:360] Setting OutFile to fd 1 ...
I1101 09:37:53.394706  558344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:53.394717  558344 out.go:374] Setting ErrFile to fd 2...
I1101 09:37:53.394721  558344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:53.394962  558344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
I1101 09:37:53.395577  558344 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:53.395684  558344 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:53.396115  558344 cli_runner.go:164] Run: docker container inspect functional-593346 --format={{.State.Status}}
I1101 09:37:53.413985  558344 ssh_runner.go:195] Run: systemctl --version
I1101 09:37:53.414048  558344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-593346
I1101 09:37:53.431920  558344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/functional-593346/id_rsa Username:docker}
I1101 09:37:53.533212  558344 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
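
The JSON listing above is a flat array of objects with id, repoDigests, repoTags and size keys (size is reported as a string of bytes). A small sketch of consuming that output programmatically, assuming the same binary path and profile as this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the keys visible in the `image ls --format json` output above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string
}

func main() {
	// --alsologtostderr keeps the klog lines on stderr, so stdout is pure JSON.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-593346",
		"image", "ls", "--format", "json", "--alsologtostderr").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%.13s  tags=%v  size=%s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}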

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593346 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec
repoDigests:
- docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593346 image ls --format yaml --alsologtostderr:
I1101 09:37:51.417334  558073 out.go:360] Setting OutFile to fd 1 ...
I1101 09:37:51.417583  558073 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:51.417592  558073 out.go:374] Setting ErrFile to fd 2...
I1101 09:37:51.417596  558073 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:51.417797  558073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
I1101 09:37:51.418399  558073 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:51.418493  558073 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:51.418882  558073 cli_runner.go:164] Run: docker container inspect functional-593346 --format={{.State.Status}}
I1101 09:37:51.436713  558073 ssh_runner.go:195] Run: systemctl --version
I1101 09:37:51.436780  558073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-593346
I1101 09:37:51.454275  558073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/functional-593346/id_rsa Username:docker}
I1101 09:37:51.558487  558073 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 ssh pgrep buildkitd: exit status 1 (285.355021ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image build -t localhost/my-image:functional-593346 testdata/build --alsologtostderr
2025/11/01 09:37:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-593346 image build -t localhost/my-image:functional-593346 testdata/build --alsologtostderr: (3.225874167s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-593346 image build -t localhost/my-image:functional-593346 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b9a92eea9ac
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-593346
--> bca4f57cca5
Successfully tagged localhost/my-image:functional-593346
bca4f57cca58de398568fa302de41f14f3f23d528c05866a450e17fc342c5d0e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-593346 image build -t localhost/my-image:functional-593346 testdata/build --alsologtostderr:
I1101 09:37:51.939745  558252 out.go:360] Setting OutFile to fd 1 ...
I1101 09:37:51.940086  558252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:51.940097  558252 out.go:374] Setting ErrFile to fd 2...
I1101 09:37:51.940101  558252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:37:51.940317  558252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
I1101 09:37:51.940938  558252 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:51.941640  558252 config.go:182] Loaded profile config "functional-593346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:37:51.942116  558252 cli_runner.go:164] Run: docker container inspect functional-593346 --format={{.State.Status}}
I1101 09:37:51.959734  558252 ssh_runner.go:195] Run: systemctl --version
I1101 09:37:51.959784  558252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-593346
I1101 09:37:51.977338  558252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/functional-593346/id_rsa Username:docker}
I1101 09:37:52.078021  558252 build_images.go:162] Building image from path: /tmp/build.3150256699.tar
I1101 09:37:52.078083  558252 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 09:37:52.086900  558252 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3150256699.tar
I1101 09:37:52.091019  558252 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3150256699.tar: stat -c "%s %y" /var/lib/minikube/build/build.3150256699.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3150256699.tar': No such file or directory
I1101 09:37:52.091050  558252 ssh_runner.go:362] scp /tmp/build.3150256699.tar --> /var/lib/minikube/build/build.3150256699.tar (3072 bytes)
I1101 09:37:52.109603  558252 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3150256699
I1101 09:37:52.117912  558252 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3150256699 -xf /var/lib/minikube/build/build.3150256699.tar
I1101 09:37:52.126609  558252 crio.go:315] Building image: /var/lib/minikube/build/build.3150256699
I1101 09:37:52.126681  558252 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-593346 /var/lib/minikube/build/build.3150256699 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1101 09:37:55.083697  558252 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-593346 /var/lib/minikube/build/build.3150256699 --cgroup-manager=cgroupfs: (2.956987657s)
I1101 09:37:55.083778  558252 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3150256699
I1101 09:37:55.092761  558252 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3150256699.tar
I1101 09:37:55.100754  558252 build_images.go:218] Built localhost/my-image:functional-593346 from /tmp/build.3150256699.tar
I1101 09:37:55.100792  558252 build_images.go:134] succeeded building to: functional-593346
I1101 09:37:55.100797  558252 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls
E1101 09:39:09.364102  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:25.495315  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:53.205531  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:46:25.495325  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)
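
The build above ships a tar of the local testdata/build context into the node and runs podman against it (the three STEP lines), then functional_test.go:466 re-lists images to confirm the tag landed. A compact sketch of that build-then-verify flow under the same assumptions (binary path, profile, tag and context directory taken from this log; the context is assumed to contain a Dockerfile like the one shown):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-593346"

	// Build the image inside the node from the local context directory.
	build := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", "localhost/my-image:"+profile, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Re-list images and make sure the freshly built tag is present.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(ls), "localhost/my-image") {
		panic("built image not present in the listing")
	}
	fmt.Println("localhost/my-image:" + profile + " built and listed")
}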

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.964985661s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-593346
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdVerifyCleanup497280213/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdVerifyCleanup497280213/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdVerifyCleanup497280213/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T" /mount1: exit status 1 (352.936075ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:37:41.910480  517687 retry.go:31] will retry after 477.462505ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-593346 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdVerifyCleanup497280213/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdVerifyCleanup497280213/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-593346 /tmp/TestFunctionalparallelMountCmdVerifyCleanup497280213/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)
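
VerifyCleanup leaves three mount daemons running against /mount1 through /mount3 and then relies on a single `mount -p <profile> --kill=true` call to stop all of them; the "unable to find parent, assuming dead" lines afterwards are the test confirming the daemons are gone. A sketch of that teardown, with hypothetical source directories (the profile name and the --kill flag are taken from the log above):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile := "functional-593346"
	var daemons []*exec.Cmd

	// Start one mount daemon per target, left running in the background.
	for i, target := range []string{"/mount1", "/mount2", "/mount3"} {
		src := fmt.Sprintf("/tmp/mount-src-%d", i) // hypothetical host directories
		if err := os.MkdirAll(src, 0o755); err != nil {
			panic(err)
		}
		cmd := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
			src+":"+target, "--alsologtostderr", "-v=1")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		daemons = append(daemons, cmd)
	}

	// One kill switch tears down every mount daemon for the profile.
	if out, err := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
		"--kill=true").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cleanup failed: %v\n%s", err, out))
	}
	_ = daemons // the test then verifies these background processes have exited
}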

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image rm kicbase/echo-server:functional-593346 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-593346 service list: (1.717229732s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-593346 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-593346 service list -o json: (1.715805534s)
functional_test.go:1504: Took "1.715909452s" to run "out/minikube-linux-amd64 -p functional-593346 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-593346
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-593346
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-593346
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (174.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m53.292310883s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (174.05s)
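
StartCluster is one `start` invocation with the HA flags echoed above (--ha --memory 3072 --wait true --driver=docker --container-runtime=crio) followed by a `status` sweep across the resulting nodes. A sketch of driving the same two commands from Go, with the profile name and flags copied from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one minikube invocation against the ha-858454 profile and
// streams its output, mirroring the two commands logged by ha_test.go.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "ha-858454"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Bring up a multi-control-plane cluster with the flags used in this report.
	if err := run("start", "--ha", "--memory", "3072", "--wait", "true",
		"--alsologtostderr", "-v", "5", "--driver=docker", "--container-runtime=crio"); err != nil {
		panic(err)
	}
	// Then confirm every node reports healthy, as the test does right after.
	if err := run("status", "--alsologtostderr", "-v", "5"); err != nil {
		fmt.Println("status reported a problem:", err)
	}
}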

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 kubectl -- rollout status deployment/busybox: (4.336040222s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-lnfrz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-s2847 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-wnpm6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-lnfrz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-s2847 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-wnpm6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-lnfrz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-s2847 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-wnpm6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.48s)
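
DeployApp applies the busybox DNS manifest, waits for the rollout, then execs nslookup in every pod against kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local. A sketch of that per-pod DNS sweep using the same `kubectl --` passthrough seen above; the pod names are discovered the same way the test discovers them rather than hard-coded:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "ha-858454"
	kubectl := func(args ...string) ([]byte, error) {
		full := append([]string{"-p", profile, "kubectl", "--"}, args...)
		return exec.Command("out/minikube-linux-amd64", full...).Output()
	}

	// Same pod discovery the test uses: the names of the deployed busybox pods.
	out, err := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	if err != nil {
		panic(err)
	}

	// Every pod must resolve both external and in-cluster names through CoreDNS.
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			if _, err := kubectl("exec", pod, "--", "nslookup", name); err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n", pod, name, err)
			}
		}
	}
}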

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-lnfrz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-lnfrz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-s2847 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-s2847 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-wnpm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 kubectl -- exec busybox-7b57f96db7-wnpm6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 node add --alsologtostderr -v 5
E1101 09:51:25.495321  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 node add --alsologtostderr -v 5: (56.413141734s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-858454 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (17.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp testdata/cp-test.txt ha-858454:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2761731706/001/cp-test_ha-858454.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454:/home/docker/cp-test.txt ha-858454-m02:/home/docker/cp-test_ha-858454_ha-858454-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m02 "sudo cat /home/docker/cp-test_ha-858454_ha-858454-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454:/home/docker/cp-test.txt ha-858454-m03:/home/docker/cp-test_ha-858454_ha-858454-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test_ha-858454_ha-858454-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454:/home/docker/cp-test.txt ha-858454-m04:/home/docker/cp-test_ha-858454_ha-858454-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m04 "sudo cat /home/docker/cp-test_ha-858454_ha-858454-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp testdata/cp-test.txt ha-858454-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2761731706/001/cp-test_ha-858454-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m02:/home/docker/cp-test.txt ha-858454:/home/docker/cp-test_ha-858454-m02_ha-858454.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test_ha-858454-m02_ha-858454.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m02:/home/docker/cp-test.txt ha-858454-m03:/home/docker/cp-test_ha-858454-m02_ha-858454-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test_ha-858454-m02_ha-858454-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m02:/home/docker/cp-test.txt ha-858454-m04:/home/docker/cp-test_ha-858454-m02_ha-858454-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m04 "sudo cat /home/docker/cp-test_ha-858454-m02_ha-858454-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp testdata/cp-test.txt ha-858454-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2761731706/001/cp-test_ha-858454-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m03:/home/docker/cp-test.txt ha-858454:/home/docker/cp-test_ha-858454-m03_ha-858454.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test_ha-858454-m03_ha-858454.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m03:/home/docker/cp-test.txt ha-858454-m02:/home/docker/cp-test_ha-858454-m03_ha-858454-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m02 "sudo cat /home/docker/cp-test_ha-858454-m03_ha-858454-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m03:/home/docker/cp-test.txt ha-858454-m04:/home/docker/cp-test_ha-858454-m03_ha-858454-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m04 "sudo cat /home/docker/cp-test_ha-858454-m03_ha-858454-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp testdata/cp-test.txt ha-858454-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2761731706/001/cp-test_ha-858454-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m04:/home/docker/cp-test.txt ha-858454:/home/docker/cp-test_ha-858454-m04_ha-858454.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test_ha-858454-m04_ha-858454.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m04:/home/docker/cp-test.txt ha-858454-m02:/home/docker/cp-test_ha-858454-m04_ha-858454-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m02 "sudo cat /home/docker/cp-test_ha-858454-m04_ha-858454-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 cp ha-858454-m04:/home/docker/cp-test.txt ha-858454-m03:/home/docker/cp-test_ha-858454-m04_ha-858454-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test_ha-858454-m04_ha-858454-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.87s)
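For reference, a minimal sketch of the copy-and-verify round trip this test performs, using the same cp and ssh subcommands logged above (shown against a stock minikube binary rather than the test's out/minikube-linux-amd64):

  # host -> node: copy a local file in, then read it back over ssh
  minikube -p ha-858454 cp testdata/cp-test.txt ha-858454:/home/docker/cp-test.txt
  minikube -p ha-858454 ssh -n ha-858454 "sudo cat /home/docker/cp-test.txt"
  # node -> node: a node name may appear on either side of the cp arguments
  minikube -p ha-858454 cp ha-858454-m02:/home/docker/cp-test.txt ha-858454-m03:/home/docker/cp-test_ha-858454-m02_ha-858454-m03.txt
  minikube -p ha-858454 ssh -n ha-858454-m03 "sudo cat /home/docker/cp-test_ha-858454-m02_ha-858454-m03.txt"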

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.39s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 node stop m02 --alsologtostderr -v 5: (12.658596217s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5: exit status 7 (735.451493ms)

                                                
                                                
-- stdout --
	ha-858454
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-858454-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-858454-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-858454-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:52:03.629112  583066 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:52:03.629458  583066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:52:03.629468  583066 out.go:374] Setting ErrFile to fd 2...
	I1101 09:52:03.629475  583066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:52:03.629787  583066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:52:03.630067  583066 out.go:368] Setting JSON to false
	I1101 09:52:03.630109  583066 mustload.go:66] Loading cluster: ha-858454
	I1101 09:52:03.630220  583066 notify.go:221] Checking for updates...
	I1101 09:52:03.630688  583066 config.go:182] Loaded profile config "ha-858454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:52:03.630719  583066 status.go:174] checking status of ha-858454 ...
	I1101 09:52:03.631435  583066 cli_runner.go:164] Run: docker container inspect ha-858454 --format={{.State.Status}}
	I1101 09:52:03.650530  583066 status.go:371] ha-858454 host status = "Running" (err=<nil>)
	I1101 09:52:03.650556  583066 host.go:66] Checking if "ha-858454" exists ...
	I1101 09:52:03.650855  583066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858454
	I1101 09:52:03.669374  583066 host.go:66] Checking if "ha-858454" exists ...
	I1101 09:52:03.669629  583066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:52:03.669668  583066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858454
	I1101 09:52:03.686901  583066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/ha-858454/id_rsa Username:docker}
	I1101 09:52:03.786197  583066 ssh_runner.go:195] Run: systemctl --version
	I1101 09:52:03.793375  583066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:52:03.807255  583066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:52:03.868881  583066 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:52:03.858232271 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:52:03.869681  583066 kubeconfig.go:125] found "ha-858454" server: "https://192.168.49.254:8443"
	I1101 09:52:03.869732  583066 api_server.go:166] Checking apiserver status ...
	I1101 09:52:03.869788  583066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:52:03.885457  583066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W1101 09:52:03.894622  583066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:52:03.894686  583066 ssh_runner.go:195] Run: ls
	I1101 09:52:03.898975  583066 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 09:52:03.903185  583066 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 09:52:03.903211  583066 status.go:463] ha-858454 apiserver status = Running (err=<nil>)
	I1101 09:52:03.903221  583066 status.go:176] ha-858454 status: &{Name:ha-858454 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:52:03.903238  583066 status.go:174] checking status of ha-858454-m02 ...
	I1101 09:52:03.903491  583066 cli_runner.go:164] Run: docker container inspect ha-858454-m02 --format={{.State.Status}}
	I1101 09:52:03.921622  583066 status.go:371] ha-858454-m02 host status = "Stopped" (err=<nil>)
	I1101 09:52:03.921645  583066 status.go:384] host is not running, skipping remaining checks
	I1101 09:52:03.921652  583066 status.go:176] ha-858454-m02 status: &{Name:ha-858454-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:52:03.921679  583066 status.go:174] checking status of ha-858454-m03 ...
	I1101 09:52:03.921975  583066 cli_runner.go:164] Run: docker container inspect ha-858454-m03 --format={{.State.Status}}
	I1101 09:52:03.939978  583066 status.go:371] ha-858454-m03 host status = "Running" (err=<nil>)
	I1101 09:52:03.940006  583066 host.go:66] Checking if "ha-858454-m03" exists ...
	I1101 09:52:03.940284  583066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858454-m03
	I1101 09:52:03.958728  583066 host.go:66] Checking if "ha-858454-m03" exists ...
	I1101 09:52:03.959025  583066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:52:03.959065  583066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858454-m03
	I1101 09:52:03.976567  583066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/ha-858454-m03/id_rsa Username:docker}
	I1101 09:52:04.076962  583066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:52:04.091526  583066 kubeconfig.go:125] found "ha-858454" server: "https://192.168.49.254:8443"
	I1101 09:52:04.091558  583066 api_server.go:166] Checking apiserver status ...
	I1101 09:52:04.091595  583066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:52:04.103696  583066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1175/cgroup
	W1101 09:52:04.112892  583066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1175/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:52:04.112952  583066 ssh_runner.go:195] Run: ls
	I1101 09:52:04.117085  583066 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 09:52:04.121494  583066 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 09:52:04.121521  583066 status.go:463] ha-858454-m03 apiserver status = Running (err=<nil>)
	I1101 09:52:04.121535  583066 status.go:176] ha-858454-m03 status: &{Name:ha-858454-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:52:04.121555  583066 status.go:174] checking status of ha-858454-m04 ...
	I1101 09:52:04.121803  583066 cli_runner.go:164] Run: docker container inspect ha-858454-m04 --format={{.State.Status}}
	I1101 09:52:04.141488  583066 status.go:371] ha-858454-m04 host status = "Running" (err=<nil>)
	I1101 09:52:04.141519  583066 host.go:66] Checking if "ha-858454-m04" exists ...
	I1101 09:52:04.141814  583066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858454-m04
	I1101 09:52:04.160952  583066 host.go:66] Checking if "ha-858454-m04" exists ...
	I1101 09:52:04.161236  583066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:52:04.161277  583066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858454-m04
	I1101 09:52:04.179043  583066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/ha-858454-m04/id_rsa Username:docker}
	I1101 09:52:04.279569  583066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:52:04.293103  583066 status.go:176] ha-858454-m04 status: &{Name:ha-858454-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.39s)
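A rough reproduction of what this test asserts: after one control-plane node is stopped, status reports that node as Stopped and exits non-zero (status 7 in the run above). A sketch with a stock minikube binary and the same profile name:

  minikube -p ha-858454 node stop m02
  # status exits 7 while any node is down; capture the code instead of letting the shell abort
  minikube -p ha-858454 status --alsologtostderr -v 5 || echo "status exit code: $? (7 expected with a stopped node)"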

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.93s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 node start m02 --alsologtostderr -v 5
E1101 09:52:09.891065  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:09.897426  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:09.908879  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:09.930380  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:09.971818  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:10.054133  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:10.216331  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:10.538642  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:11.180630  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:12.462722  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:15.024604  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 node start m02 --alsologtostderr -v 5: (13.94424454s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.93s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1101 09:52:20.145990  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.41s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 stop --alsologtostderr -v 5
E1101 09:52:30.388154  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:48.566991  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:52:50.870357  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 stop --alsologtostderr -v 5: (44.531060685s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 start --wait true --alsologtostderr -v 5
E1101 09:53:31.832208  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 start --wait true --alsologtostderr -v 5: (55.741728245s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.41s)
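The check here is that a full stop/start cycle preserves the node list. A hedged sketch of the same flow; the diff at the end is purely illustrative (the test itself compares the node list output in Go):

  minikube -p ha-858454 node list --alsologtostderr -v 5 > nodes-before.txt
  minikube -p ha-858454 stop --alsologtostderr -v 5
  minikube -p ha-858454 start --wait true --alsologtostderr -v 5
  minikube -p ha-858454 node list --alsologtostderr -v 5 > nodes-after.txt
  diff nodes-before.txt nodes-after.txt   # expected to be empty: restart keeps all nodes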

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.63s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 node delete m03 --alsologtostderr -v 5: (9.789763039s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.63s)
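After deleting the m03 control plane, the test confirms the remaining nodes are Ready via a go-template. A sketch of the same check with simplified quoting (the log wraps the template in an extra layer of quotes):

  minikube -p ha-858454 node delete m03 --alsologtostderr -v 5
  kubectl get nodes
  # every remaining node should print True for its Ready condition
  kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'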

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (41.82s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 stop --alsologtostderr -v 5
E1101 09:54:53.757295  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 stop --alsologtostderr -v 5: (41.702981577s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5: exit status 7 (118.753785ms)

                                                
                                                
-- stdout --
	ha-858454
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-858454-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-858454-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:54:54.425492  597068 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:54:54.425800  597068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:54.425811  597068 out.go:374] Setting ErrFile to fd 2...
	I1101 09:54:54.425815  597068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:54.426115  597068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 09:54:54.426336  597068 out.go:368] Setting JSON to false
	I1101 09:54:54.426367  597068 mustload.go:66] Loading cluster: ha-858454
	I1101 09:54:54.426497  597068 notify.go:221] Checking for updates...
	I1101 09:54:54.426976  597068 config.go:182] Loaded profile config "ha-858454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:54:54.426999  597068 status.go:174] checking status of ha-858454 ...
	I1101 09:54:54.427572  597068 cli_runner.go:164] Run: docker container inspect ha-858454 --format={{.State.Status}}
	I1101 09:54:54.445948  597068 status.go:371] ha-858454 host status = "Stopped" (err=<nil>)
	I1101 09:54:54.445980  597068 status.go:384] host is not running, skipping remaining checks
	I1101 09:54:54.445989  597068 status.go:176] ha-858454 status: &{Name:ha-858454 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:54:54.446023  597068 status.go:174] checking status of ha-858454-m02 ...
	I1101 09:54:54.446335  597068 cli_runner.go:164] Run: docker container inspect ha-858454-m02 --format={{.State.Status}}
	I1101 09:54:54.463716  597068 status.go:371] ha-858454-m02 host status = "Stopped" (err=<nil>)
	I1101 09:54:54.463763  597068 status.go:384] host is not running, skipping remaining checks
	I1101 09:54:54.463778  597068 status.go:176] ha-858454-m02 status: &{Name:ha-858454-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:54:54.463820  597068 status.go:174] checking status of ha-858454-m04 ...
	I1101 09:54:54.464119  597068 cli_runner.go:164] Run: docker container inspect ha-858454-m04 --format={{.State.Status}}
	I1101 09:54:54.481409  597068 status.go:371] ha-858454-m04 host status = "Stopped" (err=<nil>)
	I1101 09:54:54.481434  597068 status.go:384] host is not running, skipping remaining checks
	I1101 09:54:54.481440  597068 status.go:176] ha-858454-m04 status: &{Name:ha-858454-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.82s)
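Same pattern as the single-node stop earlier, applied to the whole profile: stop halts every node, and status then exits 7 with all hosts reported Stopped. A minimal sketch with a stock minikube binary:

  minikube -p ha-858454 stop --alsologtostderr -v 5
  minikube -p ha-858454 status --alsologtostderr -v 5 || echo "status exit code: $? (7 expected when every node is stopped)"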

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (54.66s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.808607861s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.66s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (47.4s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 node add --control-plane --alsologtostderr -v 5
E1101 09:56:25.495527  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-858454 node add --control-plane --alsologtostderr -v 5: (46.485549401s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-858454 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.40s)
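Adding a control-plane member back into the HA cluster is a single command, and the follow-up status call is how the test checks that the new node comes up Running and Configured. Sketch using the same subcommands as the log:

  minikube -p ha-858454 node add --control-plane --alsologtostderr -v 5
  minikube -p ha-858454 status --alsologtostderr -v 5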

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
TestJSONOutput/start/Command (38.06s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-748163 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1101 09:57:09.890411  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-748163 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.060979438s)
--- PASS: TestJSONOutput/start/Command (38.06s)
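With --output=json, each progress line on stdout is a CloudEvents-style JSON object (the TestErrorJSONOutput stdout further down shows the shape). A sketch of extracting just the step messages; jq is not part of the test and is assumed to be installed:

  minikube start -p json-output-748163 --output=json --user=testUser --memory=3072 --wait=true \
    --driver=docker --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'   # jq usage is illustrative only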

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.13s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-748163 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-748163 --output=json --user=testUser: (6.126735705s)
--- PASS: TestJSONOutput/stop/Command (6.13s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-981282 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-981282 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.754766ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2693b113-36c3-4da6-a9db-0998e1591dcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-981282] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9deb103-5f82-4256-875c-8f9ef6c0d90e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21832"}}
	{"specversion":"1.0","id":"56aca43c-bef9-48bf-8260-2238451ffffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d6e780f-d227-4009-afb7-f605bcac2236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig"}}
	{"specversion":"1.0","id":"8ed5ab08-d073-4b0f-8b8b-ead824872ad9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube"}}
	{"specversion":"1.0","id":"9bc5a164-be54-4b23-9ef1-4e3387f66ac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"eeaef511-15e8-4a68-af87-b7caad8952c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b0899fca-18f7-4b75-8f00-da33575f5677","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-981282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-981282
--- PASS: TestErrorJSONOutput (0.23s)
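The deliberately invalid --driver=fail value makes start fail fast, and the failure is emitted as a structured io.k8s.sigs.minikube.error event (name DRV_UNSUPPORTED_OS, exitcode 56) as shown in the stdout above. A sketch of pulling that event out of the stream, again assuming jq is available:

  minikube start -p json-output-error-981282 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'   # jq is illustrative only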

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.1s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-649371 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-649371 --network=: (35.853053608s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-649371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-649371
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-649371: (2.225269385s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.10s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.59s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-688497 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-688497 --network=bridge: (21.539973227s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-688497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-688497
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-688497: (2.034659174s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.59s)

                                                
                                    
TestKicExistingNetwork (27.93s)

=== RUN   TestKicExistingNetwork
I1101 09:58:41.791310  517687 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 09:58:41.808193  517687 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 09:58:41.808281  517687 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 09:58:41.808301  517687 cli_runner.go:164] Run: docker network inspect existing-network
W1101 09:58:41.826422  517687 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 09:58:41.826455  517687 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1101 09:58:41.826473  517687 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1101 09:58:41.826618  517687 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 09:58:41.843750  517687 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db3052bfa0e7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:6a:af:78:80:46} reservation:<nil>}
I1101 09:58:41.844160  517687 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9f150}
I1101 09:58:41.844190  517687 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 09:58:41.844236  517687 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 09:58:41.899966  517687 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-988326 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-988326 --network=existing-network: (25.762364081s)
helpers_test.go:175: Cleaning up "existing-network-988326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-988326
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-988326: (2.022318466s)
I1101 09:59:09.702581  517687 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.93s)
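The interesting part of this test is that the docker network already exists before minikube start runs; the network_create log lines above show exactly how it is created. A sketch of the same flow by hand, reusing the flags from that log line:

  # create the bridge network with the labels minikube uses to recognize (and later clean up) the network
  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  # point the new profile at the pre-existing network instead of letting minikube create one
  minikube start -p existing-network-988326 --network=existing-network
  docker network ls --format '{{.Name}}'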

                                                
                                    
TestKicCustomSubnet (27.35s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-893471 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-893471 --subnet=192.168.60.0/24: (25.146499597s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-893471 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-893471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-893471
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-893471: (2.188755999s)
--- PASS: TestKicCustomSubnet (27.35s)
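The --subnet flag controls which network minikube creates, and the follow-up docker network inspect is how the test verifies the request was honored. Minimal sketch:

  minikube start -p custom-subnet-893471 --subnet=192.168.60.0/24
  # should print 192.168.60.0/24
  docker network inspect custom-subnet-893471 --format '{{(index .IPAM.Config 0).Subnet}}'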

                                                
                                    
TestKicStaticIP (23.82s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-293037 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-293037 --static-ip=192.168.200.200: (21.497238374s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-293037 ip
helpers_test.go:175: Cleaning up "static-ip-293037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-293037
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-293037: (2.170269767s)
--- PASS: TestKicStaticIP (23.82s)
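Same idea for a fixed node address: --static-ip pins the IP at start time, and minikube ip reads it back. Sketch (the equality check is the test's presumed assertion):

  minikube start -p static-ip-293037 --static-ip=192.168.200.200
  minikube -p static-ip-293037 ip   # expected to print 192.168.200.200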

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (47.17s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-587184 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-587184 --driver=docker  --container-runtime=crio: (20.542591132s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-589496 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-589496 --driver=docker  --container-runtime=crio: (20.436048323s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-587184
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-589496
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-589496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-589496
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-589496: (2.472576027s)
helpers_test.go:175: Cleaning up "first-587184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-587184
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-587184: (2.438920713s)
--- PASS: TestMinikubeProfile (47.17s)
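The profile subcommand switches the active profile between the two clusters started here, and profile list -ojson exposes the result as JSON. A sketch of the same sequence (what exactly the test asserts about the JSON is not shown in the log):

  minikube start -p first-587184 --driver=docker --container-runtime=crio
  minikube start -p second-589496 --driver=docker --container-runtime=crio
  minikube profile first-587184   # make first-587184 the active profile
  minikube profile list -ojson    # JSON view of all profiles, including which one is active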

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-418099 --memory=3072 --mount-string /tmp/TestMountStartserial606424119/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-418099 --memory=3072 --mount-string /tmp/TestMountStartserial606424119/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.074980315s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.08s)
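The long start invocation above boils down to mounting a host directory into the node at start time; the VerifyMount* steps later in this group confirm the mount via ssh -- ls. A condensed sketch of the same flags with a stock minikube binary:

  minikube start -p mount-start-1-418099 --memory=3072 \
    --mount-string /tmp/TestMountStartserial606424119/001:/minikube-host \
    --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=crio
  # the host directory should now be listable inside the node
  minikube -p mount-start-1-418099 ssh -- ls /minikube-host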

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-418099 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.65s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-439416 --memory=3072 --mount-string /tmp/TestMountStartserial606424119/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-439416 --memory=3072 --mount-string /tmp/TestMountStartserial606424119/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.648347355s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.65s)

TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-439416 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.75s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-418099 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-418099 --alsologtostderr -v=5: (1.751991814s)
--- PASS: TestMountStart/serial/DeleteFirst (1.75s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-439416 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-439416
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-439416: (1.268066447s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (8.2s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-439416
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-439416: (7.203183109s)
--- PASS: TestMountStart/serial/RestartStopped (8.20s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-439416 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (64.5s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-955035 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 10:01:25.495264  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:09.891387  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-955035 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.996588971s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.50s)

TestMultiNode/serial/DeployApp2Nodes (4.58s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-955035 -- rollout status deployment/busybox: (3.150353361s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-4frwl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-cstzg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-4frwl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-cstzg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-4frwl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-cstzg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.58s)

TestMultiNode/serial/PingHostFrom2Pods (0.75s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-4frwl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-4frwl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-cstzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-955035 -- exec busybox-7b57f96db7-cstzg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
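A minimal sketch of the per-pod host-reachability check performed above (the pod name placeholder is illustrative; the commands mirror the ones in the log):

    minikube kubectl -p multinode-955035 -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    minikube kubectl -p multinode-955035 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"   # 192.168.67.1 is the host address resolved in the previous step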

TestMultiNode/serial/AddNode (24.23s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-955035 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-955035 -v=5 --alsologtostderr: (23.569069767s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.23s)

TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-955035 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.68s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.18s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp testdata/cp-test.txt multinode-955035:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2683752248/001/cp-test_multinode-955035.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035:/home/docker/cp-test.txt multinode-955035-m02:/home/docker/cp-test_multinode-955035_multinode-955035-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m02 "sudo cat /home/docker/cp-test_multinode-955035_multinode-955035-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035:/home/docker/cp-test.txt multinode-955035-m03:/home/docker/cp-test_multinode-955035_multinode-955035-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m03 "sudo cat /home/docker/cp-test_multinode-955035_multinode-955035-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp testdata/cp-test.txt multinode-955035-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2683752248/001/cp-test_multinode-955035-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035-m02:/home/docker/cp-test.txt multinode-955035:/home/docker/cp-test_multinode-955035-m02_multinode-955035.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035 "sudo cat /home/docker/cp-test_multinode-955035-m02_multinode-955035.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035-m02:/home/docker/cp-test.txt multinode-955035-m03:/home/docker/cp-test_multinode-955035-m02_multinode-955035-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m03 "sudo cat /home/docker/cp-test_multinode-955035-m02_multinode-955035-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp testdata/cp-test.txt multinode-955035-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2683752248/001/cp-test_multinode-955035-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035-m03:/home/docker/cp-test.txt multinode-955035:/home/docker/cp-test_multinode-955035-m03_multinode-955035.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035 "sudo cat /home/docker/cp-test_multinode-955035-m03_multinode-955035.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 cp multinode-955035-m03:/home/docker/cp-test.txt multinode-955035-m02:/home/docker/cp-test_multinode-955035-m03_multinode-955035-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 ssh -n multinode-955035-m02 "sudo cat /home/docker/cp-test_multinode-955035-m03_multinode-955035-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.18s)
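A minimal sketch of one copy round-trip from the matrix above (the paths are the ones used by the test; the target node is illustrative):

    minikube -p multinode-955035 cp testdata/cp-test.txt multinode-955035:/home/docker/cp-test.txt                             # local file -> primary node
    minikube -p multinode-955035 cp multinode-955035:/home/docker/cp-test.txt multinode-955035-m02:/home/docker/cp-test.txt    # node -> node
    minikube -p multinode-955035 ssh -n multinode-955035-m02 "sudo cat /home/docker/cp-test.txt"                               # confirm the file arrived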

TestMultiNode/serial/StopNode (2.31s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-955035 node stop m03: (1.271933982s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-955035 status: exit status 7 (519.693512ms)

-- stdout --
	multinode-955035
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-955035-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-955035-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-955035 status --alsologtostderr: exit status 7 (517.367004ms)

-- stdout --
	multinode-955035
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-955035-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-955035-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 10:03:03.991372  656695 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:03:03.991630  656695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:03.991639  656695 out.go:374] Setting ErrFile to fd 2...
	I1101 10:03:03.991643  656695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:03.991859  656695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:03:03.992055  656695 out.go:368] Setting JSON to false
	I1101 10:03:03.992082  656695 mustload.go:66] Loading cluster: multinode-955035
	I1101 10:03:03.992199  656695 notify.go:221] Checking for updates...
	I1101 10:03:03.992459  656695 config.go:182] Loaded profile config "multinode-955035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:03:03.992477  656695 status.go:174] checking status of multinode-955035 ...
	I1101 10:03:03.992954  656695 cli_runner.go:164] Run: docker container inspect multinode-955035 --format={{.State.Status}}
	I1101 10:03:04.012461  656695 status.go:371] multinode-955035 host status = "Running" (err=<nil>)
	I1101 10:03:04.012492  656695 host.go:66] Checking if "multinode-955035" exists ...
	I1101 10:03:04.012812  656695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-955035
	I1101 10:03:04.030742  656695 host.go:66] Checking if "multinode-955035" exists ...
	I1101 10:03:04.031066  656695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:03:04.031118  656695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-955035
	I1101 10:03:04.049013  656695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/multinode-955035/id_rsa Username:docker}
	I1101 10:03:04.147899  656695 ssh_runner.go:195] Run: systemctl --version
	I1101 10:03:04.154764  656695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:03:04.167796  656695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:03:04.223959  656695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 10:03:04.214000879 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:03:04.224567  656695 kubeconfig.go:125] found "multinode-955035" server: "https://192.168.67.2:8443"
	I1101 10:03:04.224612  656695 api_server.go:166] Checking apiserver status ...
	I1101 10:03:04.224656  656695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:03:04.237593  656695 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup
	W1101 10:03:04.246861  656695 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:03:04.246922  656695 ssh_runner.go:195] Run: ls
	I1101 10:03:04.251288  656695 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 10:03:04.255638  656695 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 10:03:04.255666  656695 status.go:463] multinode-955035 apiserver status = Running (err=<nil>)
	I1101 10:03:04.255678  656695 status.go:176] multinode-955035 status: &{Name:multinode-955035 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:03:04.255700  656695 status.go:174] checking status of multinode-955035-m02 ...
	I1101 10:03:04.256047  656695 cli_runner.go:164] Run: docker container inspect multinode-955035-m02 --format={{.State.Status}}
	I1101 10:03:04.274787  656695 status.go:371] multinode-955035-m02 host status = "Running" (err=<nil>)
	I1101 10:03:04.274818  656695 host.go:66] Checking if "multinode-955035-m02" exists ...
	I1101 10:03:04.275122  656695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-955035-m02
	I1101 10:03:04.293226  656695 host.go:66] Checking if "multinode-955035-m02" exists ...
	I1101 10:03:04.293547  656695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:03:04.293593  656695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-955035-m02
	I1101 10:03:04.311731  656695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21832-514161/.minikube/machines/multinode-955035-m02/id_rsa Username:docker}
	I1101 10:03:04.411618  656695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:03:04.424959  656695 status.go:176] multinode-955035-m02 status: &{Name:multinode-955035-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:03:04.424995  656695 status.go:174] checking status of multinode-955035-m03 ...
	I1101 10:03:04.425276  656695 cli_runner.go:164] Run: docker container inspect multinode-955035-m03 --format={{.State.Status}}
	I1101 10:03:04.442778  656695 status.go:371] multinode-955035-m03 host status = "Stopped" (err=<nil>)
	I1101 10:03:04.442824  656695 status.go:384] host is not running, skipping remaining checks
	I1101 10:03:04.442847  656695 status.go:176] multinode-955035-m03 status: &{Name:multinode-955035-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

TestMultiNode/serial/StartAfterStop (7.46s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-955035 node start m03 -v=5 --alsologtostderr: (6.73290673s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.46s)

TestMultiNode/serial/RestartKeepsNodes (74.81s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-955035
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-955035
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-955035: (29.518507629s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-955035 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-955035 --wait=true -v=5 --alsologtostderr: (45.163204466s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-955035
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.81s)

TestMultiNode/serial/DeleteNode (5.29s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-955035 node delete m03: (4.673601497s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.29s)

TestMultiNode/serial/StopMultiNode (28.63s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-955035 stop: (28.414172583s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-955035 status: exit status 7 (106.201414ms)

-- stdout --
	multinode-955035
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-955035-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-955035 status --alsologtostderr: exit status 7 (105.374663ms)

-- stdout --
	multinode-955035
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-955035-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 10:05:00.589015  666379 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:05:00.589140  666379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:05:00.589149  666379 out.go:374] Setting ErrFile to fd 2...
	I1101 10:05:00.589154  666379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:05:00.589386  666379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:05:00.589583  666379 out.go:368] Setting JSON to false
	I1101 10:05:00.589609  666379 mustload.go:66] Loading cluster: multinode-955035
	I1101 10:05:00.589739  666379 notify.go:221] Checking for updates...
	I1101 10:05:00.590056  666379 config.go:182] Loaded profile config "multinode-955035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:05:00.590074  666379 status.go:174] checking status of multinode-955035 ...
	I1101 10:05:00.590573  666379 cli_runner.go:164] Run: docker container inspect multinode-955035 --format={{.State.Status}}
	I1101 10:05:00.609624  666379 status.go:371] multinode-955035 host status = "Stopped" (err=<nil>)
	I1101 10:05:00.609652  666379 status.go:384] host is not running, skipping remaining checks
	I1101 10:05:00.609659  666379 status.go:176] multinode-955035 status: &{Name:multinode-955035 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:05:00.609685  666379 status.go:174] checking status of multinode-955035-m02 ...
	I1101 10:05:00.610024  666379 cli_runner.go:164] Run: docker container inspect multinode-955035-m02 --format={{.State.Status}}
	I1101 10:05:00.627896  666379 status.go:371] multinode-955035-m02 host status = "Stopped" (err=<nil>)
	I1101 10:05:00.627920  666379 status.go:384] host is not running, skipping remaining checks
	I1101 10:05:00.627926  666379 status.go:176] multinode-955035-m02 status: &{Name:multinode-955035-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.63s)

TestMultiNode/serial/RestartMultiNode (28.94s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-955035 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-955035 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (28.318391083s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-955035 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (28.94s)

TestMultiNode/serial/ValidateNameConflict (24.11s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-955035
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-955035-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-955035-m02 --driver=docker  --container-runtime=crio: exit status 14 (86.70718ms)

-- stdout --
	* [multinode-955035-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-955035-m02' is duplicated with machine name 'multinode-955035-m02' in profile 'multinode-955035'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-955035-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-955035-m03 --driver=docker  --container-runtime=crio: (21.173822132s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-955035
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-955035: exit status 80 (325.469247ms)

-- stdout --
	* Adding node m03 to cluster multinode-955035 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-955035-m03 already exists in multinode-955035-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-955035-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-955035-m03: (2.454434796s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.11s)

TestScheduledStopUnix (98.35s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-473081 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-473081 --memory=3072 --driver=docker  --container-runtime=crio: (22.11817614s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-473081 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-473081 -n scheduled-stop-473081
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-473081 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 10:13:38.236466  517687 retry.go:31] will retry after 101.975µs: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.237696  517687 retry.go:31] will retry after 79.99µs: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.238892  517687 retry.go:31] will retry after 217.224µs: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.240046  517687 retry.go:31] will retry after 303.932µs: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.241193  517687 retry.go:31] will retry after 263.278µs: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.242309  517687 retry.go:31] will retry after 462.189µs: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.243423  517687 retry.go:31] will retry after 950.258µs: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.244538  517687 retry.go:31] will retry after 1.762427ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.246736  517687 retry.go:31] will retry after 3.082516ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.249893  517687 retry.go:31] will retry after 3.430298ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.254140  517687 retry.go:31] will retry after 7.074076ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.261371  517687 retry.go:31] will retry after 4.608382ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.266746  517687 retry.go:31] will retry after 14.240897ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.282078  517687 retry.go:31] will retry after 12.028745ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.294305  517687 retry.go:31] will retry after 31.007779ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
I1101 10:13:38.325681  517687 retry.go:31] will retry after 49.396447ms: open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/scheduled-stop-473081/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-473081 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-473081 -n scheduled-stop-473081
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-473081
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-473081 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-473081
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-473081: exit status 7 (84.336619ms)

-- stdout --
	scheduled-stop-473081
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-473081 -n scheduled-stop-473081
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-473081 -n scheduled-stop-473081: exit status 7 (87.373434ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-473081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-473081
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-473081: (4.623918775s)
--- PASS: TestScheduledStopUnix (98.35s)
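A minimal sketch of the scheduled-stop workflow the test walks through (the profile name is illustrative; the flags are the ones used above):

    minikube start -p sched-demo --memory=3072 --driver=docker --container-runtime=crio
    minikube stop -p sched-demo --schedule 5m          # arm a delayed stop
    minikube stop -p sched-demo --cancel-scheduled     # cancel it while the host is still running
    minikube stop -p sched-demo --schedule 15s         # re-arm with a short delay
    minikube status -p sched-demo                      # returns exit status 7 once the scheduled stop has fired (host: Stopped)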

TestInsufficientStorage (9.98s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-500399 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-500399 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.395295398s)

-- stdout --
	{"specversion":"1.0","id":"0cdd50e3-e397-44cb-934e-c46a626d28d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-500399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6f96bdb-1d97-4bf0-8b29-ac1127a9c9cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21832"}}
	{"specversion":"1.0","id":"d797d1d9-40a1-45eb-9cd6-bbec1436c20b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"35971495-37ee-46e7-a134-c0afde3f07ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig"}}
	{"specversion":"1.0","id":"e47f92cd-3f50-470f-b9e5-4ac9d9c7159f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube"}}
	{"specversion":"1.0","id":"5201fef7-3667-4114-8d2d-9fd52b2438f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fd674fb5-a3fa-4f88-9cdf-c36b3bff76c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2533fdab-3dff-47f6-93ff-fb7faff2b6f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d4697017-6fef-469b-82a6-67d8c5897266","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c48a2302-173c-44a5-a51a-24eb6c2fddba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"05bfbae2-8c5c-4b24-ad3e-858370e3bafb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"259a6fac-eacc-4124-9b47-4c8d90a66ebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-500399\" primary control-plane node in \"insufficient-storage-500399\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5797e6d6-9c92-4b52-84d6-0162a65d20c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8badfa2-14e0-431f-966c-cf29fee1d49e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"51938ea2-6435-45d6-b41c-503f9d313c19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-500399 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-500399 --output=json --layout=cluster: exit status 7 (302.630622ms)

-- stdout --
	{"Name":"insufficient-storage-500399","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-500399","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1101 10:15:01.669333  687995 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-500399" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-500399 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-500399 --output=json --layout=cluster: exit status 7 (311.758846ms)

-- stdout --
	{"Name":"insufficient-storage-500399","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-500399","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1101 10:15:01.981306  688108 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-500399" does not appear in /home/jenkins/minikube-integration/21832-514161/kubeconfig
	E1101 10:15:01.992488  688108 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/insufficient-storage-500399/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-500399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-500399
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-500399: (1.973735077s)
--- PASS: TestInsufficientStorage (9.98s)
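A minimal repro sketch, assuming the two MINIKUBE_TEST_* variables shown in the JSON output above are how the test simulates a nearly full /var (the profile name is illustrative):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 minikube start -p storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    # expected: exit status 26 (RSRC_DOCKER_STORAGE); the advice suggests pruning Docker data or passing --force to skip the check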

TestRunningBinaryUpgrade (56.21s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1881179092 start -p running-upgrade-821146 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1881179092 start -p running-upgrade-821146 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.656558138s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-821146 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-821146 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.65211355s)
helpers_test.go:175: Cleaning up "running-upgrade-821146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-821146
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-821146: (2.83137672s)
--- PASS: TestRunningBinaryUpgrade (56.21s)

TestKubernetesUpgrade (302.06s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 10:17:09.892089  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.189990084s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-949166
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-949166: (2.455922154s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-949166 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-949166 status --format={{.Host}}: exit status 7 (99.514176ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.694407672s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-949166 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (99.098509ms)

-- stdout --
	* [kubernetes-upgrade-949166] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-949166
	    minikube start -p kubernetes-upgrade-949166 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9491662 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-949166 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-949166 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.88088593s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-949166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-949166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-949166: (2.567971044s)
--- PASS: TestKubernetesUpgrade (302.06s)
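Note: the PASS above covers the whole upgrade walk-through. As a minimal sketch of the same sequence outside the harness, using the versions exercised in this run and a hypothetical profile name (k8s-upgrade-demo), the flow is roughly:

    # start on the older release
    minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    # stop, then start again on the newer release to upgrade in place
    minikube stop -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
    # an in-place downgrade back to v1.28.0 is refused (K8S_DOWNGRADE_UNSUPPORTED above);
    # deleting and recreating the profile, as the suggestion text says, is the supported path
    minikube delete -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.28.0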

                                                
                                    
x
+
TestMissingContainerUpgrade (105.11s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.528156646 start -p missing-upgrade-489499 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.528156646 start -p missing-upgrade-489499 --memory=3072 --driver=docker  --container-runtime=crio: (56.326633978s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-489499
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-489499: (2.30578593s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-489499
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-489499 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-489499 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.478327645s)
helpers_test.go:175: Cleaning up "missing-upgrade-489499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-489499
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-489499: (2.782847746s)
--- PASS: TestMissingContainerUpgrade (105.11s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.05s)

                                                
                                    
x
+
TestPause/serial/Start (63.56s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-297661 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-297661 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m3.562368071s)
--- PASS: TestPause/serial/Start (63.56s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (73.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2089422694 start -p stopped-upgrade-333944 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2089422694 start -p stopped-upgrade-333944 --memory=3072 --vm-driver=docker  --container-runtime=crio: (56.134701733s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2089422694 -p stopped-upgrade-333944 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2089422694 -p stopped-upgrade-333944 stop: (2.899559975s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-333944 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-333944 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.695281562s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.73s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.57s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-297661 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-297661 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.558386349s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.57s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-333944
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-333944: (1.235637046s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194729 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-194729 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (100.482691ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-194729] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
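Note: the exit status 14 above is the expected MK_USAGE guard: --no-kubernetes cannot be combined with an explicit --kubernetes-version. A minimal sketch of the two valid alternatives, both taken from commands that appear elsewhere in this run:

    # drop the version flag entirely when Kubernetes is not wanted
    minikube start -p NoKubernetes-194729 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio
    # or, if a version is pinned in the global config, clear it first as the error suggests
    minikube config unset kubernetes-version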

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (32.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194729 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1101 10:16:25.494807  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/addons-050432/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194729 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.373890819s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-194729 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (6.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-456743 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-456743 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (210.063948ms)

                                                
                                                
-- stdout --
	* [false-456743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:16:53.848006  719344 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:16:53.848151  719344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:53.848163  719344 out.go:374] Setting ErrFile to fd 2...
	I1101 10:16:53.848169  719344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:53.848406  719344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-514161/.minikube/bin
	I1101 10:16:53.848976  719344 out.go:368] Setting JSON to false
	I1101 10:16:53.850137  719344 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10751,"bootTime":1761981463,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:16:53.850257  719344 start.go:143] virtualization: kvm guest
	I1101 10:16:53.852917  719344 out.go:179] * [false-456743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:16:53.855575  719344 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:16:53.855619  719344 notify.go:221] Checking for updates...
	I1101 10:16:53.857557  719344 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:16:53.858648  719344 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-514161/kubeconfig
	I1101 10:16:53.859942  719344 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-514161/.minikube
	I1101 10:16:53.861519  719344 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:16:53.862605  719344 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:16:53.864250  719344 config.go:182] Loaded profile config "NoKubernetes-194729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:53.864407  719344 config.go:182] Loaded profile config "cert-options-278823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:53.864550  719344 config.go:182] Loaded profile config "force-systemd-env-482102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:53.864705  719344 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:16:53.893292  719344 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:16:53.893446  719344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:16:53.965764  719344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:80 SystemTime:2025-11-01 10:16:53.954579156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:16:53.965950  719344 docker.go:319] overlay module found
	I1101 10:16:53.968392  719344 out.go:179] * Using the docker driver based on user configuration
	I1101 10:16:53.969761  719344 start.go:309] selected driver: docker
	I1101 10:16:53.969781  719344 start.go:930] validating driver "docker" against <nil>
	I1101 10:16:53.969793  719344 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:16:53.971698  719344 out.go:203] 
	W1101 10:16:53.972953  719344 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 10:16:53.977274  719344 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-456743 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-456743" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-194729
contexts:
- context:
    cluster: NoKubernetes-194729
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-194729
  name: NoKubernetes-194729
current-context: NoKubernetes-194729
kind: Config
users:
- name: NoKubernetes-194729
  user:
    client-certificate: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/NoKubernetes-194729/client.crt
    client-key: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/NoKubernetes-194729/client.key


                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-456743

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-456743"

                                                
                                                
----------------------- debugLogs end: false-456743 [took: 5.769265896s] --------------------------------
helpers_test.go:175: Cleaning up "false-456743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-456743
--- PASS: TestNetworkPlugins/group/false (6.17s)
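As a side note on the failure captured above: the exit status 14 is the validation error "The \"crio\" container runtime requires CNI", so --cni=false is rejected and the debugLogs dump above correctly finds no false-456743 profile or context. A minimal sketch of a start that should pass this validation, assuming one of minikube's built-in CNI options (bridge is used here purely as an example):

    # crio needs a CNI plug-in, so select one instead of disabling CNI
    minikube start -p false-456743 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio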

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (31.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194729 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194729 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.817559882s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-194729 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-194729 status -o json: exit status 2 (340.939423ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-194729","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-194729
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-194729: (3.188665738s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.35s)
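Note: the exit status 2 from "status -o json" above is consistent with the state this step drives the profile into: the node container is Running while Kubelet and APIServer are Stopped. A small sketch of inspecting that state from a shell, assuming jq is available to pull the relevant fields out of the JSON:

    # a non-zero exit with Host "Running" but Kubelet/APIServer "Stopped" means the node is up without Kubernetes
    minikube -p NoKubernetes-194729 status -o json | jq '{Host, Kubelet, APIServer}'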

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194729 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194729 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.259914944s)
--- PASS: TestNoKubernetes/serial/Start (6.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-194729 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-194729 "sudo systemctl is-active --quiet service kubelet": exit status 1 (331.986585ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-194729
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-194729: (1.292717708s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194729 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194729 --driver=docker  --container-runtime=crio: (7.770258257s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-194729 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-194729 "sudo systemctl is-active --quiet service kubelet": exit status 1 (327.888147ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (50.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.870750094s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (51.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.560433335s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-556573 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [44ef04e3-c9bd-4265-88b9-680b1e522491] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [44ef04e3-c9bd-4265-88b9-680b1e522491] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002936956s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-556573 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-680879 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f829de8a-1e4a-4549-8dea-1e345dc87d58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f829de8a-1e4a-4549-8dea-1e345dc87d58] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003971112s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-680879 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-556573 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-556573 --alsologtostderr -v=3: (16.204561646s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-680879 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-680879 --alsologtostderr -v=3: (16.268655531s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556573 -n old-k8s-version-556573
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556573 -n old-k8s-version-556573: exit status 7 (86.047638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-556573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
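Note: EnableAddonAfterStop first checks that the profile really is stopped (the exit status 7 from "status" is tolerated) and then enables the dashboard addon with an image override while the cluster is down. A minimal sketch of the same two steps, using the profile and image reference from this run; --images maps an addon component name to an alternative image:

    minikube status --format='{{.Host}}' -p old-k8s-version-556573 -n old-k8s-version-556573
    minikube addons enable dashboard -p old-k8s-version-556573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4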

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (44.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-556573 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (44.402532025s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556573 -n old-k8s-version-556573
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.76s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680879 -n no-preload-680879
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680879 -n no-preload-680879: exit status 7 (99.428065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-680879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (46.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-680879 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.427306622s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680879 -n no-preload-680879
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wrwks" [5b1c4fe0-25e6-40ca-989f-123a98c5db4c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003876587s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wrwks" [5b1c4fe0-25e6-40ca-989f-123a98c5db4c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003275101s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-556573 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6hkgl" [f7ef4e23-14fd-41d1-a72b-4107d31b74a9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003876235s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-556573 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6hkgl" [f7ef4e23-14fd-41d1-a72b-4107d31b74a9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00366646s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-680879 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-680879 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
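
VerifyKubernetesImages is driven by `minikube image list --format=json`; the log only shows the images the check flags as non-minikube, not the JSON shape. A hedged sketch that runs the same command for the no-preload-680879 profile and dumps whatever JSON comes back (decoded generically, since the exact schema is an assumption here):

// list_images.go -- hedged sketch, not the test's implementation.
// Runs `minikube image list --format=json` for a profile and pretty-prints
// the result; the concrete JSON schema is not shown in the log above, so the
// output is decoded generically instead of into a named struct.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "no-preload-680879" // profile name taken from the log above

	raw, err := exec.Command("out/minikube-linux-amd64",
		"-p", profile, "image", "list", "--format=json").Output()
	if err != nil {
		log.Fatalf("image list failed: %v", err)
	}

	var images any
	if err := json.Unmarshal(raw, &images); err != nil {
		log.Fatalf("unexpected output: %v", err)
	}
	pretty, _ := json.MarshalIndent(images, "", "  ")
	fmt.Println(string(pretty))
}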

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (71.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.396511905s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.426630419s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (27.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.400374071s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-006653 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-006653 --alsologtostderr -v=3: (2.497183043s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-535119 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cae18218-eb25-4d8d-ba04-f9e73dda2131] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cae18218-eb25-4d8d-ba04-f9e73dda2131] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00506181s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-535119 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)
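
The DeployApp step above creates testdata/busybox.yaml, waits up to 8m0s for the integration-test=busybox pod to become healthy, and then reads the pod's open-file limit. A rough standalone equivalent, assuming kubectl is on PATH and using `kubectl wait` in place of the harness's own polling:

// deploy_busybox_check.go -- hedged sketch of the DeployApp flow above, run
// outside the test harness. testdata/busybox.yaml and the context name are
// taken from the log lines; everything else is illustrative.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const ctx = "default-k8s-diff-port-535119"

	// Create the busybox pod and wait for it to become Ready (the test allows
	// up to 8m0s for the integration-test=busybox label to turn healthy).
	run("--context", ctx, "create", "-f", "testdata/busybox.yaml")
	run("--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m")

	// Same sanity check as the log: read the open-file limit inside the pod.
	fmt.Print(run("--context", ctx, "exec", "busybox", "--",
		"/bin/sh", "-c", "ulimit -n"))
}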

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006653 -n newest-cni-006653
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006653 -n newest-cni-006653: exit status 7 (95.848956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-006653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
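
EnableAddonAfterStop above relies on `minikube status` returning exit code 7 for a stopped host (the test treats that as acceptable) before enabling the dashboard addon with an overridden MetricsScraper image. A hedged sketch of that sequence for the newest-cni-006653 profile, outside the test harness:

// enable_addon_after_stop.go -- hedged sketch of the EnableAddonAfterStop
// step above: `minikube status` exits with code 7 when the host is stopped,
// after which the dashboard addon is enabled with a custom MetricsScraper image.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "newest-cni-006653" // profile name from the log above
	bin := "out/minikube-linux-amd64"

	status := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := status.CombinedOutput()
	fmt.Printf("host status: %s\n", out)
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() != 7 {
		log.Fatalf("unexpected status exit code %d", ee.ExitCode())
	}

	enable := exec.Command(bin, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("enable dashboard failed: %v\n%s", err, out)
	}
}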

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-006653 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.871933004s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006653 -n newest-cni-006653
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-535119 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-535119 --alsologtostderr -v=3: (18.164981562s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-006653 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-678014 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fcbbe122-495c-462f-913f-f3f2b1b23890] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fcbbe122-495c-462f-913f-f3f2b1b23890] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005184121s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-678014 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.02815838s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119: exit status 7 (109.32764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-535119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-535119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.083117606s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-535119 -n default-k8s-diff-port-535119
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (16.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-678014 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-678014 --alsologtostderr -v=3: (16.765752522s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678014 -n embed-certs-678014
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678014 -n embed-certs-678014: exit status 7 (91.131034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-678014 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (48.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-678014 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.061442607s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678014 -n embed-certs-678014
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.561220033s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.56s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-456743 "pgrep -a kubelet"
I1101 10:22:09.359184  517687 config.go:182] Loaded profile config "auto-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-456743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mmd2z" [0ef80b5a-af3f-4366-9fbf-e6eb8fd43e03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:22:09.891121  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/functional-593346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mmd2z" [0ef80b5a-af3f-4366-9fbf-e6eb8fd43e03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003571469s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-456743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
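
The DNS, Localhost and HairPin checks above all run a one-liner inside the netcat deployment: nslookup against kubernetes.default, and `nc -z` against localhost:8080 and the netcat service itself. A small sketch that replays those three commands via kubectl exec (context name and commands copied from the log; the wrapper itself is not part of the suite):

// netcat_checks.go -- hedged sketch reproducing the DNS, Localhost and
// HairPin checks above against the netcat deployment, outside the test harness.
package main

import (
	"fmt"
	"os/exec"
)

func inPod(ctx, cmd string) error {
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s\n", cmd, out)
	return err
}

func main() {
	const ctx = "auto-456743" // context name from the log above
	checks := map[string]string{
		"DNS":       "nslookup kubernetes.default",
		"Localhost": "nc -w 5 -i 5 -z localhost 8080",
		"HairPin":   "nc -w 5 -i 5 -z netcat 8080",
	}
	for name, cmd := range checks {
		if err := inPod(ctx, cmd); err != nil {
			fmt.Printf("%s check failed: %v\n", name, err)
		} else {
			fmt.Printf("%s check passed\n", name)
		}
	}
}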

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mgn6f" [a30b417a-4ca6-4777-bc88-ab26ae34fe87] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004411226s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mgn6f" [a30b417a-4ca6-4777-bc88-ab26ae34fe87] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004786586s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-535119 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-535119 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (54.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (54.823145644s)
--- PASS: TestNetworkPlugins/group/calico/Start (54.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cpmxg" [6d549260-f10c-4681-8da0-9ae59df674d3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004090202s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cpmxg" [6d549260-f10c-4681-8da0-9ae59df674d3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004458098s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-678014 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.28863262s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-678014 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-xnxjl" [0944d95b-edc1-40ba-af41-8197fa822359] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004734074s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-456743 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-456743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sbz9v" [e1f02d0f-8399-4680-bba1-924b0da2e3a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sbz9v" [e1f02d0f-8399-4680-bba1-924b0da2e3a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006860926s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (66.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m6.916518406s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.92s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-456743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (49.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.797961916s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.80s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-txnvf" [d6338a72-53a1-427e-9809-feabebdc61ee] Running
E1101 10:23:39.659830  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:39.666258  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:39.678389  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:39.700563  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:39.742068  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:39.823559  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:39.985216  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004713166s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-456743 "pgrep -a kubelet"
E1101 10:23:40.306954  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1101 10:23:40.504815  517687 config.go:182] Loaded profile config "calico-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-456743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vkp78" [2350e530-a134-48aa-b80e-b8abefa17944] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:23:40.948334  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:42.230197  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vkp78" [2350e530-a134-48aa-b80e-b8abefa17944] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004551523s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-456743 "pgrep -a kubelet"
I1101 10:23:44.733726  517687 config.go:182] Loaded profile config "custom-flannel-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-456743 replace --force -f testdata/netcat-deployment.yaml
E1101 10:23:44.792296  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/old-k8s-version-556573/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gjthm" [3aa83be8-34ce-4ddc-a9f2-26a45ce7b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:23:45.965009  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:45.971408  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:45.982868  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:46.004327  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:46.045899  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:46.127433  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:46.289104  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:46.611051  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:47.253313  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gjthm" [3aa83be8-34ce-4ddc-a9f2-26a45ce7b1a7] Running
E1101 10:23:48.534989  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004387302s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-456743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-456743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (42.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-456743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (42.884147191s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.88s)
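
Taken together, the TestNetworkPlugins starts in this report differ only in the CNI-related flag handed to `minikube start`. A hedged sketch that replays that matrix with the same local binary and common flags (profile names and flag combinations copied from the log lines above; the wrapper is illustrative, not the suite's code):

// cni_matrix.go -- hedged sketch: replays the `minikube start` flag
// combinations used by the TestNetworkPlugins runs above, one profile per
// CNI configuration.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64"
	common := []string{"--memory=3072", "--alsologtostderr", "--wait=true",
		"--wait-timeout=15m", "--driver=docker", "--container-runtime=crio"}

	// Profile -> extra flag, exactly as exercised in the log above.
	matrix := []struct{ profile, flag string }{
		{"auto-456743", ""},
		{"kindnet-456743", "--cni=kindnet"},
		{"calico-456743", "--cni=calico"},
		{"custom-flannel-456743", "--cni=testdata/kube-flannel.yaml"},
		{"enable-default-cni-456743", "--enable-default-cni=true"},
		{"flannel-456743", "--cni=flannel"},
		{"bridge-456743", "--cni=bridge"},
	}

	for _, m := range matrix {
		args := append([]string{"start", "-p", m.profile}, common...)
		if m.flag != "" {
			args = append(args, m.flag)
		}
		fmt.Printf("starting %s\n", m.profile)
		if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
			fmt.Printf("start failed for %s: %v\n%s\n", m.profile, err, out)
		}
	}
}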

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-456743 "pgrep -a kubelet"
I1101 10:24:11.430440  517687 config.go:182] Loaded profile config "enable-default-cni-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-456743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2g5ww" [d4d97920-32a3-4c8a-b945-6b509a9eff94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2g5ww" [d4d97920-32a3-4c8a-b945-6b509a9eff94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003660705s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-456743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-p4c97" [dfcaa004-773a-400a-a6b2-11d183daa9d8] Running
E1101 10:24:26.943642  517687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/no-preload-680879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003872114s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-456743 "pgrep -a kubelet"
I1101 10:24:29.434338  517687 config.go:182] Loaded profile config "flannel-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-456743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8pcgs" [7216f9d6-9b94-4fff-b800-525789467580] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8pcgs" [7216f9d6-9b94-4fff-b800-525789467580] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004394725s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-456743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-456743 "pgrep -a kubelet"
I1101 10:24:54.166108  517687 config.go:182] Loaded profile config "bridge-456743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-456743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nzc96" [ac6fc366-c175-4543-bbb3-b1c13b7e725c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nzc96" [ac6fc366-c175-4543-bbb3-b1c13b7e725c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004891428s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-456743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)
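
For reference, the DNS, Localhost and HairPin checks above all exec into the "netcat" deployment that NetCatPod applies from testdata/netcat-deployment.yaml (a single dnsutils container, per the readiness messages). The HairPin probe is the interesting one: the pod dials the hostname "netcat", which only succeeds if a service of that name routes the connection back to the pod itself, i.e. hairpin traffic. A minimal way to re-run the probes by hand against a live profile, assuming (not shown in this log) a service named netcat exposing port 8080, is:

  # service DNS resolves from inside the pod
  kubectl --context bridge-456743 exec deployment/netcat -- nslookup kubernetes.default
  # the pod can reach a port on its own loopback
  kubectl --context bridge-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod reaches itself back through the netcat service
  kubectl --context bridge-456743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"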

                                                
                                    

Test skip (27/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-083568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-083568
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-456743 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-456743" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:16:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-env-482102
contexts:
- context:
    cluster: force-systemd-env-482102
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:16:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-env-482102
  name: force-systemd-env-482102
current-context: force-systemd-env-482102
kind: Config
users:
- name: force-systemd-env-482102
  user:
    client-certificate: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/force-systemd-env-482102/client.crt
    client-key: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/force-systemd-env-482102/client.key
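
The config dumped above is simply the whole default kubeconfig, not one belonging to this test: the kubenet-456743 profile was never started, so no matching context exists and the file reflects whichever profile was active at the time (force-systemd-env-482102 here). That is also why every query pinned to the missing context in this debug log fails. As a quick sketch of how to confirm that from the same workstation:

  # list the contexts kubectl knows about, and the one currently selected
  kubectl config get-contexts
  kubectl config current-context
  # pinning a command to the absent profile reproduces the failures above
  kubectl --context kubenet-456743 get pods -A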

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-456743

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-456743"

                                                
                                                
----------------------- debugLogs end: kubenet-456743 [took: 4.505068946s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-456743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-456743
--- SKIP: TestNetworkPlugins/group/kubenet (4.69s)
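
The skip above is expected on this job: kubenet is the legacy kubelet networking mode and carries no CNI configuration, while the cri-o runtime requires a CNI, so the CNI-backed profiles exercised earlier (bridge, flannel, and so on) stand in for it. As a rough sketch with current minikube flags (assumed here, not taken from this log), the difference between the two setups looks like:

  # kubenet variant: legacy kubelet networking, no CNI config - not usable with cri-o
  minikube start -p kubenet-456743 --driver=docker --container-runtime=cri-o --network-plugin=kubenet
  # what this job runs instead: an explicit CNI on top of cri-o
  minikube start -p bridge-456743 --driver=docker --container-runtime=cri-o --cni=bridge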

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-456743 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-456743" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21832-514161/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-194729
contexts:
- context:
    cluster: NoKubernetes-194729
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-194729
  name: NoKubernetes-194729
current-context: NoKubernetes-194729
kind: Config
users:
- name: NoKubernetes-194729
  user:
    client-certificate: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/NoKubernetes-194729/client.crt
    client-key: /home/jenkins/minikube-integration/21832-514161/.minikube/profiles/NoKubernetes-194729/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-456743

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-456743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456743"

                                                
                                                
----------------------- debugLogs end: cilium-456743 [took: 5.892657572s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-456743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-456743
--- SKIP: TestNetworkPlugins/group/cilium (6.08s)

                                                
                                    